Spark Streaming(1)Spark Streaming Concept and Zookeeper/Kafka on Local

By | January 31, 2019

I had been using Spark for more than a year, from 0.7 to 0.9 in production. Recently I came back to Spark and am considering upgrading to version 1.3.1. There are a lot of new features and good ideas since 0.9.

1. Introduction
Standalone Cluster
The master machine is a single point of failure.
Option #1: use ZooKeeper to manage several masters:
spark.deploy.recoveryMode - default value is NONE; change it to ZOOKEEPER
spark.deploy.zookeeper.url - the ZooKeeper quorum, a comma-separated list of host:port pairs
spark.deploy.zookeeper.dir - e.g., /spark
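These recovery properties are usually passed to the master daemon through SPARK_DAEMON_JAVA_OPTS in conf/spark-env.sh. A minimal sketch, assuming a three-node ZooKeeper quorum (the zk1/zk2/zk3 hostnames are placeholders):

```shell
# conf/spark-env.sh on each master (hostnames are placeholders)
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```

With this in place you can start standby masters on other machines; ZooKeeper elects a leader and fails over if the active master dies.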

For Spark jobs, a standalone cluster keeps all the jars and files in the working directory, so we need to set spark.worker.cleanup.appDataTtl to clean them up. A YARN cluster does that automatically.
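Worker-side cleanup is configured through SPARK_WORKER_OPTS in conf/spark-env.sh. A sketch with illustrative values (the TTL below is one week in seconds; tune it to your retention needs):

```shell
# conf/spark-env.sh on each worker (values are illustrative)
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.appDataTtl=604800"   # keep app dirs for 7 days
```

Note that cleanup only removes directories of stopped applications.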

Cluster Job Scheduling
Standalone cluster - FIFO; spark.cores.max, spark.deploy.defaultCores, and others set how much resource one application can use.
YARN - --num-executors, --executor-memory, etc.
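On YARN these flags are given to spark-submit. A sketch of a submission with explicit resource caps; the class name, jar, and sizes are placeholders:

```shell
spark-submit \
  --master yarn-cluster \
  --num-executors 4 \
  --executor-memory 2g \
  --executor-cores 2 \
  --class com.example.MyApp \
  myapp.jar
```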

Spark Streaming
Sources: Kafka, Flume, Twitter, ZeroMQ, Kinesis

original DStream: time1 | time2 | time3 | time4 | time5
windowed DStream: window at time1 | window at time2

ssc.checkpoint(hdfsPath); usually the checkpoint interval should be 5 - 10 times the sliding interval.
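Putting the window and checkpoint pieces together, here is a hypothetical word count over a 30-second window sliding every 10 seconds, with the DStream checkpointed every 50 seconds (5x the slide, per the guideline above). The app name, HDFS path, and socket source are placeholders:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("WindowExample")
val ssc = new StreamingContext(conf, Seconds(10))       // 10-second batches
ssc.checkpoint("hdfs:///tmp/spark-checkpoint")          // hypothetical HDFS path

val lines = ssc.socketTextStream("localhost", 9999)
val counts = lines.flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
counts.checkpoint(Seconds(50))                          // 5x the sliding interval
counts.print()

ssc.start()
ssc.awaitTermination()
```

The window duration and sliding interval must both be multiples of the batch interval.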


Receive the streams in parallel:
val numStreams = 5
val kafkaStreams = (1 to numStreams).map { i => KafkaUtils.createStream(...) }
val unifiedStream = streamingContext.union(kafkaStreams)

Recover the Task from a Checkpoint
def functionToCreateContext(): StreamingContext = {
  val ssc = new StreamingContext(...)
  val lines = ssc.socketTextStream(...)
  ...
  ssc.checkpoint(checkpointDirectory)
  ssc
}

val context = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext _)
context. ...
context.start()
context.awaitTermination()

2. Zookeeper

Install ZooKeeper
> wget
Unzip it, place it in the working directory, and add the bin directory to the PATH.

Set up the configuration
> cp conf/zoo_sample.cfg conf/zoo.cfg
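The sample config is enough for a single-node setup. Its defaults look roughly like this (dataDir in particular is worth moving off /tmp for anything beyond a quick test):

```shell
# conf/zoo.cfg (the sample's defaults)
tickTime=2000          # basic time unit in ms
initLimit=10           # ticks a follower may take to connect and sync
syncLimit=5            # ticks a follower may lag behind the leader
dataDir=/tmp/zookeeper # move this for anything beyond local testing
clientPort=2181
```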

Start the server
> bin/zkServer.sh start

Check the status
> bin/zkServer.sh status

> jps
2294 QuorumPeerMain
2330 Jps

Connect from a client
> bin/zkCli.sh -server localhost:2181

3. Kafka
Download the binary with version 0.8.2.1
> wget
Place it in the working directory and add the bin directory to the PATH.

Command to start Kafka
> bin/kafka-server-start.sh config/server.properties

Create a topic
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".

List the topics
> bin/kafka-topics.sh --list --zookeeper localhost:2181

Start a producer and send some messages
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Start a consumer
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
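With ZooKeeper and Kafka running locally, a Spark Streaming job can consume the "test" topic through the receiver-based KafkaUtils.createStream shown earlier. A minimal sketch, assuming the spark-streaming-kafka artifact is on the classpath; the group id and thread count are placeholders:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("KafkaWordCount")
val ssc = new StreamingContext(conf, Seconds(2))

// createStream(ssc, zkQuorum, groupId, Map(topic -> receiver threads))
val stream = KafkaUtils.createStream(ssc, "localhost:2181", "test-group", Map("test" -> 1))
stream.map(_._2).print()   // the tuple's second element is the message body

ssc.start()
ssc.awaitTermination()
```

Messages typed into the console producer above should then show up in the streaming job's output every batch.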

