
Thursday, July 9, 2020

Create P2P connections with WiFi Direct in android

What are the Android Wi-Fi P2P APIs?
They allow nearby devices to discover, connect to, and communicate with each other directly, without needing to connect to a common network or hotspot.

Advantages over traditional Wi-Fi ad-hoc networking:
  • Wi-Fi Direct supports WPA2 (Wi-Fi Protected Access 2) encryption.
  • Android doesn't support Wi-Fi ad-hoc mode.

Let's implement it : 

Set up application permissions :
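A minimal sketch of the manifest entries Wi-Fi Direct typically needs (on Android 8.1 and later, peer discovery also requires location access):

```xml
<!-- Wi-Fi state access and change, location for peer discovery,
     and internet for the socket transfer later on -->
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
```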
Set up a broadcast receiver :
With the help of a broadcast receiver, we will be able to listen to the various phases of the connection between two devices.
The constructor of the broadcast receiver class and its onReceive() method will look like this : 
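A minimal sketch of such a receiver, assuming the host activity is called MainActivity (an illustrative name) and exposes the callbacks used in later steps:

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.net.wifi.p2p.WifiP2pManager;

public class WiFiDirectBroadcastReceiver extends BroadcastReceiver {

    private final WifiP2pManager manager;
    private final WifiP2pManager.Channel channel;
    private final MainActivity activity; // hypothetical host activity

    public WiFiDirectBroadcastReceiver(WifiP2pManager manager,
                                       WifiP2pManager.Channel channel,
                                       MainActivity activity) {
        this.manager = manager;
        this.channel = channel;
        this.activity = activity;
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        String action = intent.getAction();
        if (WifiP2pManager.WIFI_P2P_STATE_CHANGED_ACTION.equals(action)) {
            // Wi-Fi P2P was enabled or disabled (step 1 below)
        } else if (WifiP2pManager.WIFI_P2P_PEERS_CHANGED_ACTION.equals(action)) {
            // the list of available peers changed (step 2 below)
        } else if (WifiP2pManager.WIFI_P2P_CONNECTION_CHANGED_ACTION.equals(action)) {
            // connection state changed (step 4 below)
        } else if (WifiP2pManager.WIFI_P2P_THIS_DEVICE_CHANGED_ACTION.equals(action)) {
            // this device's details changed
        }
    }
}
```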
Create an activity and register the broadcast receiver : 
1- Create an intent filter : 
 
WifiP2pManager.WIFI_P2P_STATE_CHANGED_ACTION : Indicates a change in the Wi-Fi P2P status.
WifiP2pManager.WIFI_P2P_PEERS_CHANGED_ACTION : Indicates a change in the list of available peers.
WifiP2pManager.WIFI_P2P_CONNECTION_CHANGED_ACTION : Indicates the state of Wi-Fi P2P connectivity has changed.
WifiP2pManager.WIFI_P2P_THIS_DEVICE_CHANGED_ACTION : Indicates this device's details have changed.
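The intent filter collecting these four actions can be built like this (a sketch, assuming it is a field of the activity):

```java
private final IntentFilter intentFilter = new IntentFilter();

// e.g. in onCreate():
intentFilter.addAction(WifiP2pManager.WIFI_P2P_STATE_CHANGED_ACTION);
intentFilter.addAction(WifiP2pManager.WIFI_P2P_PEERS_CHANGED_ACTION);
intentFilter.addAction(WifiP2pManager.WIFI_P2P_CONNECTION_CHANGED_ACTION);
intentFilter.addAction(WifiP2pManager.WIFI_P2P_THIS_DEVICE_CHANGED_ACTION);
```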


2- Register the receiver in onCreate : 
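A minimal sketch of the activity setup, assuming the fields and the WiFiDirectBroadcastReceiver class described above:

```java
private WifiP2pManager manager;
private WifiP2pManager.Channel channel;
private WiFiDirectBroadcastReceiver receiver;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    // obtain the Wi-Fi P2P system service and a channel to it
    manager = (WifiP2pManager) getSystemService(Context.WIFI_P2P_SERVICE);
    channel = manager.initialize(this, getMainLooper(), null);

    receiver = new WiFiDirectBroadcastReceiver(manager, channel, this);
    registerReceiver(receiver, intentFilter);
}
```

Remember to call unregisterReceiver(receiver) in onDestroy(), or register/unregister in onResume()/onPause().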
Listen to broadcast receiver :

1- Now when you run the application, control will reach WIFI_P2P_STATE_CHANGED_ACTION. This is the point where you start finding nearby peers, provided the Wi-Fi P2P state is enabled.
So, in your broadcast receiver, check the Wi-Fi P2P state and start listening for peers if it is enabled.
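Inside onReceive(), this check can be sketched as:

```java
if (WifiP2pManager.WIFI_P2P_STATE_CHANGED_ACTION.equals(action)) {
    int state = intent.getIntExtra(WifiP2pManager.EXTRA_WIFI_STATE, -1);
    if (state == WifiP2pManager.WIFI_P2P_STATE_ENABLED) {
        activity.startFindPeers();   // startFindPeers() is our activity method, defined next
    }
}
```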


And your startFindPeers() method in the activity will look like this : 
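A sketch of the method, using WifiP2pManager's asynchronous discoverPeers() call:

```java
public void startFindPeers() {
    manager.discoverPeers(channel, new WifiP2pManager.ActionListener() {
        @Override
        public void onSuccess() {
            // discovery started; results arrive via WIFI_P2P_PEERS_CHANGED_ACTION
        }

        @Override
        public void onFailure(int reasonCode) {
            // e.g. WifiP2pManager.P2P_UNSUPPORTED, BUSY or ERROR
        }
    });
}
```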


2- As soon as nearby peers are discovered, WIFI_P2P_PEERS_CHANGED_ACTION is triggered in your broadcast receiver class. This is the point to get the list of all available devices.
So, inside the receiver's onReceive() method, add the following code to the WIFI_P2P_PEERS_CHANGED_ACTION branch and register your peer callback : 
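A sketch of that branch, assuming the activity exposes a peerListListener field (shown in the next step):

```java
// inside onReceive(), when the peer list changes, ask for the current list
else if (WifiP2pManager.WIFI_P2P_PEERS_CHANGED_ACTION.equals(action)) {
    if (manager != null) {
        manager.requestPeers(channel, activity.peerListListener);
    }
}
```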


This requestPeers() method is provided by the WifiP2pManager class, but peerListListener is a custom callback used to receive the peer list : 
The PeerListListener will look like this : 
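A sketch of the listener as an activity field:

```java
// receives the current list of discovered devices
WifiP2pManager.PeerListListener peerListListener = new WifiP2pManager.PeerListListener() {
    @Override
    public void onPeersAvailable(WifiP2pDeviceList peerList) {
        List<WifiP2pDevice> peers = new ArrayList<>(peerList.getDeviceList());
        // e.g. show the peers in a list;
        // each WifiP2pDevice exposes deviceName and deviceAddress
    }
};
```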

3- If the call is successful, you will get the list of all available devices, along with various other parameters, in onPeersAvailable(). Iterate through the list and select one device to connect to.
The next step is to build the peer configuration for the selected item in the peer list and connect to the peer using its deviceAddress : 
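A sketch of the connect call, where device is the WifiP2pDevice the user selected from the peer list:

```java
WifiP2pConfig config = new WifiP2pConfig();
config.deviceAddress = device.deviceAddress;

manager.connect(channel, config, new WifiP2pManager.ActionListener() {
    @Override
    public void onSuccess() {
        // success here only means the request was accepted;
        // WIFI_P2P_CONNECTION_CHANGED_ACTION fires when the connection forms
    }

    @Override
    public void onFailure(int reasonCode) {
        // retry or report the error
    }
});
```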

   

4- Once the connection with the peer succeeds, WIFI_P2P_CONNECTION_CHANGED_ACTION is triggered. This is the point to get the connection info from the connectionListener callback.
The connection info can be obtained by passing the connection callback as a parameter to the requestConnectionInfo() method.

And the connectionInfoListener will look like this :
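A sketch of both pieces, assuming the listener is a field of the activity:

```java
// in the receiver's onReceive(), when the connection state changes:
else if (WifiP2pManager.WIFI_P2P_CONNECTION_CHANGED_ACTION.equals(action)) {
    NetworkInfo networkInfo = intent.getParcelableExtra(WifiP2pManager.EXTRA_NETWORK_INFO);
    if (networkInfo != null && networkInfo.isConnected()) {
        manager.requestConnectionInfo(channel, activity.connectionInfoListener);
    }
}

// in the activity — decides which device acts as server and which as client:
WifiP2pManager.ConnectionInfoListener connectionInfoListener =
        new WifiP2pManager.ConnectionInfoListener() {
    @Override
    public void onConnectionInfoAvailable(WifiP2pInfo info) {
        if (info.groupFormed && info.isGroupOwner) {
            // this device is the group owner: start the server socket
        } else if (info.groupFormed) {
            // this device is a client: connect to info.groupOwnerAddress
        }
    }
};
```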


Great, our peers are now connected to each other and can transfer data between themselves.

5- To transfer the data, you have to create client and server classes based on the available connection info.
Let's take a look at server class  : 
This is the client class : 
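Both classes boil down to plain Java sockets. A minimal, self-contained sketch (class and method names are illustrative); on a real device the client would use info.groupOwnerAddress.getHostAddress() as the host, while here the demo loops back to localhost just to exercise the socket logic:

```java
import java.io.*;
import java.net.*;

public class P2pTransfer {

    // server side (group owner): accept one connection and read one line of text
    static String receiveOnce(ServerSocket serverSocket) throws IOException {
        try (Socket socket = serverSocket.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            return in.readLine();
        }
    }

    // client side: connect to the group owner and send one line
    static void sendOnce(String host, int port, String message) throws IOException {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println(message);
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket serverSocket = new ServerSocket(0)) {
            final int port = serverSocket.getLocalPort();
            Thread client = new Thread(() -> {
                try {
                    sendOnce("127.0.0.1", port, "hello peer");
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            client.start();
            String received = receiveOnce(serverSocket);
            client.join();
            System.out.println(received); // prints "hello peer"
        }
    }
}
```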
So finally, the sockets are connected to each other and data transfer can be done easily using the input and output streams of the socket.

We are a seasoned SaaS app development company that provides full-scale software solutions using next-gen technologies. Our team of developers is experienced in using the latest tools and SDKs to build performance-driven and user-friendly software solutions for multiple platforms. We also specialize in providing end-to-end DevOps solutions and cloud app development services for varied business requirements. For technical assistance, contact us at info@oodlestechnologies.com and share your requirements.

Monday, June 15, 2020

An Introduction To Kafka Architecture and Kafka as a Service

Kafka and Kafka as a Service

Apache Kafka is a fast and scalable Publish/Subscribe messaging platform. It enables the communication between producers and consumers using messaging-based topics. It allows producers to write records into Kafka that can be read by one or more consumers per consumer group. It's becoming a solution for big data and microservices applications. It is being used by several companies to solve the problem of real-time processing.

A broker is a Kafka server that runs as part of a Kafka cluster; a cluster consists of many brokers spread across several servers. The term "broker" is also sometimes used to refer to the logical system, or to Kafka as a whole.

Kafka uses ZooKeeper to manage the cluster. ZooKeeper coordinates the broker/cluster topology and is used for leader elections of broker topic partition leaders.

The Kafka architecture consists of four main APIs on which Kafka runs.
Producer API

This API allows an application to publish a stream of records to one or more Kafka topics.

Consumer API

It allows an application to subscribe to one or more topics. It also allows the application to process the stream of records that are published to the topic(s).

Streams API

This streams API allows an application to act as a stream processor. The application consumes an input stream from one or more topics and produces an output stream to one or more output topics thereby transforming input streams to output streams.

Connector API

This connector API allows building reusable producers and consumers that connect Kafka topics to existing applications and data systems.
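The producer and consumer APIs in action can be sketched in Java as follows; a minimal example assuming the org.apache.kafka:kafka-clients dependency, a broker reachable at localhost:9092, and a topic named demo-topic (both names illustrative):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaRoundTrip {
    public static void main(String[] args) {
        // Producer API: publish one record to the topic
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (Producer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "hello kafka"));
        }

        // Consumer API: subscribe and poll records as part of a consumer group
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        c.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (Consumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.println(record.key() + " -> " + record.value());
            }
        }
    }
}
```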

Kafka Cluster Architecture


Kafka architecture can also be described as a cluster with different components. 

Kafka Broker

A Kafka cluster often consists of many brokers. A single broker can handle thousands of reads and writes per second. Since brokers are stateless, they use ZooKeeper to maintain the cluster state.

Kafka ZooKeeper

Kafka uses ZooKeeper to manage and coordinate the brokers in the cluster. ZooKeeper notifies producers and consumers when a new broker joins the Kafka cluster or when a broker in the cluster fails. On being informed of a failure, producers and consumers decide how to act and start coordinating with the remaining active brokers. 

Kafka Producers

This component in the Kafka cluster architecture pushes data to the brokers. It sends messages to the broker as fast as the broker can handle them, without waiting for acknowledgments (although the producer can be configured to require them). It can also discover and send messages to new brokers as soon as they start.

Kafka Consumers

Since brokers are stateless, each Kafka consumer keeps track of how many messages it has already consumed, and this is achieved using the partition offset. The consumer remembers each message's offset, which guarantees it has consumed all the messages before it. 



Kafka cluster setup via Docker

version: '2'

services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"

  kafka-1:
    image: wurstmeister/kafka
    ports:
      - "9095:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_ADVERTISED_PORT: 9095
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LOG_DIRS: /kafka/logs
      KAFKA_BROKER_ID: 500
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./kafka_data/500:/kafka

  kafka-2:
    image: wurstmeister/kafka
    ports:
      - "9096:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka2
      KAFKA_ADVERTISED_PORT: 9096
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LOG_DIRS: /kafka/logs
      KAFKA_BROKER_ID: 501
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./kafka_data/501:/kafka

  kafka-3:
    image: wurstmeister/kafka
    ports:
      - "9097:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka3
      KAFKA_ADVERTISED_PORT: 9097
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LOG_DIRS: /kafka/logs
      KAFKA_BROKER_ID: 502
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./kafka_data/502:/kafka

Start The Cluster

Simply start the cluster using the docker-compose command from the current directory:
$ docker-compose up -d

We can quickly check which nodes are part of the cluster by running a command against zookeeper:
$ docker-compose exec zookeeper ./bin/zkCli.sh ls /brokers/ids

And that's it, we now have a Kafka cluster up and running. We can also test failover cases and other settings by simply bringing one Kafka node down and watching how the clients react.
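As a quick sanity check, we can create a replicated topic from inside one of the broker containers and list it (the topic name is illustrative; kafka-topics.sh ships on the PATH of the wurstmeister image, and these commands assume the cluster above is running):

```shell
# create a topic replicated across all three brokers
docker-compose exec kafka-1 kafka-topics.sh --create \
  --zookeeper zookeeper:2181 --replication-factor 3 --partitions 3 --topic test-topic

# confirm the topic exists
docker-compose exec kafka-1 kafka-topics.sh --list --zookeeper zookeeper:2181
```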

Managed Kafka Services

We can also use a cloud-based managed Kafka service from various cloud providers. On AWS, for example, Amazon MSK (Amazon Managed Streaming for Apache Kafka) is a fully managed, highly available, and secure Apache Kafka service.

Friday, December 2, 2016