Kafka and Storm Integration: A Code Example Walkthrough

By | December 17, 2018

The key to integrating Storm with Kafka is to use a Storm spout as the Kafka consumer, receiving the data that producers feed into the topic.

A simple picture of the flow: Kafka producer → topic → KafkaSpout (the consumer) → WordSpliter bolt → WriterBolt → local files.
Now, straight to the code!

1. First, write a main method that builds the topology and acts as the consumer of the producer's data.

package cn.itcast.storm.topology;

import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;
import cn.itcast.storm.bolt.WordSpliter;
import cn.itcast.storm.bolt.WriterBolt;
import cn.itcast.storm.spout.MessageScheme;

public class KafkaTopo {

	public static void main(String[] args) throws Exception {
		String topic = "kafkaStrom"; // the topic to consume
		String zkRoot = "/kafka-storm";
		String spoutId = "KafkaSpout";
		// The spout acts as a consumer, so we don't need to know which broker hosts
		// the topic. We only point it at ZooKeeper; it reads the topic metadata from
		// ZK to locate the brokers, then pulls the producer's data.
		// In real deployments the producer side is usually fed by Flume; a later
		// article covers the Flume-Kafka integration.
		BrokerHosts brokerHosts = new ZkHosts("weekend01:2181,weekend02:2181,weekend03:2181");
		SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, topic, zkRoot, spoutId);
		spoutConfig.forceFromStart = true; // read the topic from the beginning
		spoutConfig.scheme = new SchemeAsMultiScheme(new MessageScheme());
		TopologyBuilder builder = new TopologyBuilder();
		// Set a spout that reads from the Kafka message queue and emits to the next
		// bolt. This is not a custom spout: storm-kafka ships a ready-made KafkaSpout,
		// so we only hand it the SpoutConfig built above.
		builder.setSpout(spoutId, new KafkaSpout(spoutConfig));
		builder.setBolt("word-spilter", new WordSpliter()).shuffleGrouping(spoutId);
		// Four WriterBolt tasks run in parallel, producing four UUID-named files;
		// fieldsGrouping guarantees that tuples with the same "word" value always
		// go to the same task.
		builder.setBolt("writer", new WriterBolt(), 4).fieldsGrouping("word-spilter", new Fields("word"));
		Config conf = new Config();
		conf.setNumWorkers(4);
		conf.setNumAckers(0);
		conf.setDebug(false);

		// LocalCluster runs the topology in an in-process simulator, which is
		// convenient for development and debugging.
		LocalCluster cluster = new LocalCluster();
		cluster.submitTopology("WordCount", conf, builder.createTopology());

		// To run on a real Storm cluster, submit with StormSubmitter instead:
		//StormSubmitter.submitTopology("kafkaStrom-topo", conf, builder.createTopology());
	}
}
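The commented-out StormSubmitter line is how the topology goes to a real cluster. A common pattern, sketched below as my own suggestion (not from the original post), is to choose the submission mode from the command-line arguments:

// A minimal sketch (not in the original post): submit to the cluster when a
// topology name is passed on the command line, otherwise run locally as above.
if (args != null && args.length > 0) {
	// cluster mode: args[0] is the topology name
	StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
} else {
	// local mode: in-process simulator for development
	LocalCluster cluster = new LocalCluster();
	cluster.submitTopology("WordCount", conf, builder.createTopology());
}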

2. Implement the Scheme interface and override its methods. Its job is to deserialize the bytes the producer sends and to name the output field of each message.

The class is then passed into new KafkaSpout(spoutConfig); that ready-made KafkaSpout plays the role of the spout component in Storm.

package cn.itcast.storm.spout;

import java.io.UnsupportedEncodingException;
import java.util.List;
import backtype.storm.spout.Scheme;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class MessageScheme implements Scheme {

	private static final long serialVersionUID = 8423372426211017613L;

	// Turn the raw Kafka message bytes into a one-field tuple.
	@Override
	public List<Object> deserialize(byte[] bytes) {
		try {
			String msg = new String(bytes, "UTF-8");
			return new Values(msg);
		} catch (UnsupportedEncodingException e) {
			e.printStackTrace();
		}
		return null;
	}

	// Name the field so downstream bolts can reference it.
	@Override
	public Fields getOutputFields() {
		return new Fields("msg");
	}
}
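To see what the scheme does in isolation, here is a quick hypothetical sanity check (not part of the original post) that feeds it raw bytes:

package cn.itcast.storm.spout;

import java.util.List;

// Hypothetical standalone check, not part of the original post.
public class MessageSchemeDemo {
	public static void main(String[] args) throws Exception {
		MessageScheme scheme = new MessageScheme();
		List<Object> tuple = scheme.deserialize("hello kafka".getBytes("UTF-8"));
		System.out.println(scheme.getOutputFields().toList()); // [msg]
		System.out.println(tuple);                             // [hello kafka]
	}
}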

3. Write a bolt that splits the lines coming from the spout into words.

package cn.itcast.storm.bolt;

import org.apache.commons.lang.StringUtils;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class WordSpliter extends BaseBasicBolt {

	private static final long serialVersionUID = -5653803832498574866L;

	// Split each incoming line on spaces and emit one lowercased,
	// non-blank word per tuple.
	@Override
	public void execute(Tuple input, BasicOutputCollector collector) {
		String line = input.getString(0);
		String[] words = line.split(" ");
		for (String word : words) {
			word = word.trim();
			if (StringUtils.isNotBlank(word)) {
				word = word.toLowerCase();
				collector.emit(new Values(word));
			}
		}
	}

	@Override
	public void declareOutputFields(OutputFieldsDeclarer declarer) {
		declarer.declare(new Fields("word"));
	}
}
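The normalization the bolt applies (trim, drop blanks, lowercase) can be checked outside Storm; a hypothetical snippet, not in the original post:

package cn.itcast.storm.bolt;

import org.apache.commons.lang.StringUtils;

// Hypothetical demo of WordSpliter's splitting logic, not part of the original post.
public class WordSpliterDemo {
	public static void main(String[] args) {
		String line = "Hello  Kafka   STORM ";
		for (String word : line.split(" ")) {
			word = word.trim();
			if (StringUtils.isNotBlank(word)) {
				System.out.println(word.toLowerCase()); // hello, kafka, storm
			}
		}
	}
}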

4. Write a bolt that puts the data on local disk or into an HDFS cluster (here we write locally; storing to HDFS would mean starting up seven more VMs, so we'll take the lazy route).

package cn.itcast.storm.bolt;

import java.io.FileWriter;
import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;

/**
 * Writes each incoming word to a local file.
 * @author duanhaitao@itcast.cn
 */
public class WriterBolt extends BaseBasicBolt {

	private static final long serialVersionUID = -6586283337287975719L;
	private FileWriter writer = null;

	// Each bolt task opens its own UUID-named file, which is why four
	// parallel tasks produce four files.
	@Override
	public void prepare(Map stormConf, TopologyContext context) {
		try {
			writer = new FileWriter("c:\\storm-kafka\\" + "wordcount" + UUID.randomUUID().toString());
		} catch (IOException e) {
			throw new RuntimeException(e);
		}
	}

	// This bolt is a sink; it declares no output fields.
	@Override
	public void declareOutputFields(OutputFieldsDeclarer declarer) {
	}

	@Override
	public void execute(Tuple input, BasicOutputCollector collector) {
		String s = input.getString(0);
		try {
			writer.write(s);
			writer.write("\n");
			writer.flush();
		} catch (IOException e) {
			throw new RuntimeException(e);
		}
	}
}
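One robustness note: the FileWriter is never closed. IBasicBolt declares a cleanup() hook, so WriterBolt could close the file on shutdown; a suggested addition of mine, not in the original post:

	// Suggested addition (not in the original post): close the file when the
	// bolt shuts down. cleanup() is inherited from IBasicBolt via BaseBasicBolt.
	@Override
	public void cleanup() {
		try {
			if (writer != null) {
				writer.close();
			}
		} catch (IOException e) {
			throw new RuntimeException(e);
		}
	}

Storm only guarantees cleanup() runs in local mode (cluster workers can be killed outright), so the flush() in execute() above still matters.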

That's all the code; now let's test it.

First, make sure the ZooKeeper cluster is running, and that Kafka is up on top of it.

Start the main method (it plays the role of the Kafka consumer).

Create the topic (the name must match the topic the spout subscribes to in KafkaTopo):

bin/kafka-topics.sh --create --zookeeper weekend01:2181 --replication-factor 3 --partitions 1 --topic kafkaStrom
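To confirm the topic exists, the standard describe flag works:

bin/kafka-topics.sh --describe --zookeeper weekend01:2181 --topic kafkaStrom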

Start the producer code (an earlier article covers writing one), or start a console producer on one of the VMs.
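That earlier producer article isn't reproduced here, but for reference, a minimal Java producer sketch (my addition, assuming the org.apache.kafka:kafka-clients dependency; the broker host mirrors the commands below):

package cn.itcast.storm.producer; // hypothetical package

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Minimal producer sketch, not the earlier article's actual code.
public class SimpleProducer {
	public static void main(String[] args) {
		Properties props = new Properties();
		props.put("bootstrap.servers", "weekend01:9092");
		props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
		props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
		KafkaProducer<String, String> producer = new KafkaProducer<>(props);
		producer.send(new ProducerRecord<>("kafkaStrom", "hello kafka storm"));
		producer.close();
	}
}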

To go the command-line route instead:

bin/kafka-console-producer.sh --broker-list weekend01:9092 --topic kafkaStrom

Then type in a few arbitrary lines, such as:

hello kafka strom 

111 222 333

aaa bbb ccc

my name is xxx

Finally, check the local directory (the path defined in WriterBolt above) to confirm the consumer produced its output files.


You can see that four UUID-named files were created in that directory: the topology set the WriterBolt parallelism to 4, and each task opens its own file in prepare().


Thanks for reading!
