
flume kafka sink test


DBILITY 2018. 5. 7. 19:15

Let's add a sink to the source that was tested previously.

http://www.dbility.com/269

 

Following the diagram in the Flume docs, I added memoryChannel2 and connected it to kafkaSink.

Topics can also be created in kafka-manager.
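
For reference, the same thing can be done with the Kafka CLI. A minimal sketch for Kafka 0.9; the ZooKeeper address is an assumption, since only the broker hostnames appear in this post.

./kafka-topics.sh --create --zookeeper big-master:2181 \
--topic randomTopic --partitions 1 --replication-factor 3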

While the stream was coming in, I added partitions to the topic one at a time and reassigned them, and it worked fine. They ended up spread across all three brokers.
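
The CLI equivalent for adding partitions is kafka-topics.sh --alter (again a sketch with an assumed ZooKeeper address); moving existing partitions between brokers also needs kafka-reassign-partitions.sh and a reassignment JSON, which kafka-manager takes care of through its UI.

./kafka-topics.sh --alter --zookeeper big-master:2181 \
--topic randomTopic --partitions 3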

With a single partition, ordering is guaranteed. Once the partition count grows, ordering is of course only guaranteed within each partition.

Checking in kafka-console-consumer, the seq values come out jumbled.

I also tested a file channel on the collector / server agent, and a file channel looks like the better choice, since data loss must not happen.
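
A file channel would look roughly like the sketch below; the directories are assumptions and should live on disks that survive an agent restart.

agent.channels.fileChannel.type = file
agent.channels.fileChannel.checkpointDir = /home/hadoop/flume/checkpoint
agent.channels.fileChannel.dataDirs = /home/hadoop/flume/data
agent.channels.fileChannel.capacity = 1000000
agent.channels.fileChannel.transactionCapacity = 10000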

 

I'm currently building and testing a source that generates analysis data (reference values, upper/lower tolerance limits), and it has now been verified all the way through to the Kafka consumer.

I can't say whether this is the right way to do it, but let's just build on it.

In the ELK stack, Filebeat is said to play a role similar to Flume.

If so, should Filebeat sit on the Client-Side of that diagram and Logstash on the Server-Side? Probably so.

Hmm... would it be better to send straight from the Collector to KAFKA? Kafka can also be used as the channel itself.
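
Flume 1.8 ships a Kafka channel, so the channel itself can be backed by a Kafka topic. A minimal sketch; the topic and consumer group below are just the Flume defaults written out.

agent.channels.kafkaChannel.type = org.apache.flume.channel.kafka.KafkaChannel
agent.channels.kafkaChannel.kafka.bootstrap.servers = big-slave2:9092,big-slave3:9092,big-slave4:9092
agent.channels.kafkaChannel.kafka.topic = flume-channel
agent.channels.kafkaChannel.kafka.consumer.group.id = flume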

Normally, when the production pipeline is a pure end-to-end chain, a problem on either side inevitably means data loss.

Putting KAFKA in the middle should lower the chance of loss considerably.

Kafka's default data retention period is set to 7 days, as below.

kafka conf's server.properties -> Log Retention Policy -> log.retention.hours=168

"The Kafka cluster durably persists all published records—whether or not they have been consumed—using a configurable retention period"

It would be better to put Kafka's data on dedicated disks. At least RAID-1, or would a plain RAID-5 setup do?
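
For the dedicated-disk idea, server.properties accepts a comma-separated list of log directories and Kafka spreads partitions across them; the mount points below are assumptions.

log.dirs=/data1/kafka-logs,/data2/kafka-logs
log.retention.hours=168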

 

 

flume-conf.properties

agent.sources = avroSrc
agent.sources.avroSrc.type = avro
agent.sources.avroSrc.bind = 0.0.0.0
agent.sources.avroSrc.port = 4141
agent.sources.avroSrc.channels = memoryChannel memoryChannel2
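# both channels are listed on the source; the default replicating selector copies every event to each channel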

agent.channels = memoryChannel memoryChannel2
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000000
agent.channels.memoryChannel2.type = memory
agent.channels.memoryChannel2.capacity = 1000000


agent.sinks = hdfsSink kafkaSink
agent.sinks.hdfsSink.channel = memoryChannel
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.path = hdfs://hadoop-cluster/flume/events/%y%m%d/%H%M/%S
agent.sinks.hdfsSink.hdfs.writeFormat = Text
agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent.sinks.hdfsSink.hdfs.filePrefix = rndGen_%[FQDN]_%y%m%d%H%M%S
agent.sinks.hdfsSink.hdfs.fileSuffix = .log

#Sink
#agent.sinks = loggerSink
#agent.sinks.loggerSink.channel = memoryChannel
#agent.sinks.loggerSink.type = logger
#agent.sinks.loggerSink.maxBytesToLog = 10000

agent.sinks.kafkaSink.channel = memoryChannel2
agent.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.kafkaSink.kafka.topic = randomTopic
agent.sinks.kafkaSink.kafka.bootstrap.servers = big-slave2:9092,big-slave3:9092,big-slave4:9092
agent.sinks.kafkaSink.kafka.flumeBatchSize = 10
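# acks = 1 waits for the leader only; acks = all would lower the loss risk at some latency cost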
agent.sinks.kafkaSink.kafka.producer.acks = 1
agent.sinks.kafkaSink.kafka.producer.linger.ms = 1

flume agent

[hadoop@big-master apache-flume-1.8.0-bin]$ ./bin/flume-ng agent -n agent -c conf -f conf/flume-conf.properties &
# check the log after starting flume... quite a lot of output
08 May 2018 21:16:17,059 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start:62)  - Configuration provider starting
08 May 2018 21:16:17,068 INFO  [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:134)  - Reloading configuration file:conf/flume-conf.properties
08 May 2018 21:16:17,078 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:kafkaSink
08 May 2018 21:16:17,078 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:kafkaSink
08 May 2018 21:16:17,079 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:hdfsSink
08 May 2018 21:16:17,079 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:hdfsSink
08 May 2018 21:16:17,079 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:kafkaSink
08 May 2018 21:16:17,080 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:hdfsSink
08 May 2018 21:16:17,080 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:hdfsSink
08 May 2018 21:16:17,080 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:930)  - Added sinks: hdfsSink kafkaSink Agent: agent
08 May 2018 21:16:17,081 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:kafkaSink
08 May 2018 21:16:17,081 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:kafkaSink
08 May 2018 21:16:17,081 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:hdfsSink
08 May 2018 21:16:17,081 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:kafkaSink
08 May 2018 21:16:17,082 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:hdfsSink
08 May 2018 21:16:17,082 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:hdfsSink
08 May 2018 21:16:17,084 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:kafkaSink
08 May 2018 21:16:17,085 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:hdfsSink
08 May 2018 21:16:17,102 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration.validateConfiguration:140)  - Post-validation flume configuration contains configuration for agents: [agent]
08 May 2018 21:16:17,103 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:147)  - Creating channels
08 May 2018 21:16:17,111 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:42)  - Creating instance of channel memoryChannel type memory
08 May 2018 21:16:17,118 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:201)  - Created channel memoryChannel
08 May 2018 21:16:17,118 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:42)  - Creating instance of channel memoryChannel2 type memory
08 May 2018 21:16:17,118 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:201)  - Created channel memoryChannel2
08 May 2018 21:16:17,119 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:41)  - Creating instance of source avroSrc, type avro
08 May 2018 21:16:17,140 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:42)  - Creating instance of sink: kafkaSink, type: org.apache.flume.sink.kafka.KafkaSink
08 May 2018 21:16:17,147 INFO  [conf-file-poller-0] (org.apache.flume.sink.kafka.KafkaSink.configure:314)  - Using the static topic randomTopic. This may be overridden by event headers
08 May 2018 21:16:17,164 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:42)  - Creating instance of sink: hdfsSink, type: hdfs
08 May 2018 21:16:17,176 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:116)  - Channel memoryChannel connected to [avroSrc, hdfsSink]
08 May 2018 21:16:17,176 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:116)  - Channel memoryChannel2 connected to [avroSrc, kafkaSink]
08 May 2018 21:16:17,185 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:137)  - Starting new configuration:{ sourceRunners:{avroSrc=EventDrivenSourceRunner: { source:Avro source avroSrc: { bindAddress: 0.0.0.0, port: 4141 } }} sinkRunners:{kafkaSink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@11399f82 counterGroup:{ name:null counters:{} } }, hdfsSink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@190941cc counterGroup:{ name:null counters:{} } }} channels:{memoryChannel=org.apache.flume.channel.MemoryChannel{name: memoryChannel}, memoryChannel2=org.apache.flume.channel.MemoryChannel{name: memoryChannel2}} }
08 May 2018 21:16:17,185 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:144)  - Starting Channel memoryChannel
08 May 2018 21:16:17,186 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:144)  - Starting Channel memoryChannel2
08 May 2018 21:16:17,275 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119)  - Monitored counter group for type: CHANNEL, name: memoryChannel2: Successfully registered new MBean.
08 May 2018 21:16:17,275 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119)  - Monitored counter group for type: CHANNEL, name: memoryChannel: Successfully registered new MBean.
08 May 2018 21:16:17,276 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95)  - Component type: CHANNEL, name: memoryChannel2 started
08 May 2018 21:16:17,276 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95)  - Component type: CHANNEL, name: memoryChannel started
08 May 2018 21:16:17,277 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:171)  - Starting Sink kafkaSink
08 May 2018 21:16:17,278 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:171)  - Starting Sink hdfsSink
08 May 2018 21:16:17,280 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:182)  - Starting Source avroSrc
08 May 2018 21:16:17,281 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119)  - Monitored counter group for type: SINK, name: hdfsSink: Successfully registered new MBean.
08 May 2018 21:16:17,281 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95)  - Component type: SINK, name: hdfsSink started
08 May 2018 21:16:17,281 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.source.AvroSource.start:234)  - Starting Avro source avroSrc: { bindAddress: 0.0.0.0, port: 4141 }...
08 May 2018 21:16:17,314 INFO  [lifecycleSupervisor-1-2] (org.apache.kafka.common.config.AbstractConfig.logAll:165)  - ProducerConfig values:
        compression.type = none
        metric.reporters = []
        metadata.max.age.ms = 300000
        metadata.fetch.timeout.ms = 60000
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        bootstrap.servers = [big-slave2:9092, big-slave3:9092, big-slave4:9092]
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        buffer.memory = 33554432
        timeout.ms = 30000
        key.serializer = class org.apache.kafka.common.serialization.StringSerializer
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.keystore.type = JKS
        ssl.trustmanager.algorithm = PKIX
        block.on.buffer.full = false
        ssl.key.password = null
        max.block.ms = 60000
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        ssl.truststore.password = null
        max.in.flight.requests.per.connection = 5
        metrics.num.samples = 2
        client.id =
        ssl.endpoint.identification.algorithm = null
        ssl.protocol = TLS
        request.timeout.ms = 30000
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        acks = 1
        batch.size = 16384
        ssl.keystore.location = null
        receive.buffer.bytes = 32768
        ssl.cipher.suites = null
        ssl.truststore.type = JKS
        security.protocol = PLAINTEXT
        retries = 0
        max.request.size = 1048576
        value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        ssl.truststore.location = null
        ssl.keystore.password = null
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
        send.buffer.bytes = 131072
        linger.ms = 1

08 May 2018 21:16:17,390 INFO  [lifecycleSupervisor-1-2] (org.apache.kafka.common.utils.AppInfoParser$AppInfo.<init>:82)  - Kafka version : 0.9.0.1
08 May 2018 21:16:17,390 INFO  [lifecycleSupervisor-1-2] (org.apache.kafka.common.utils.AppInfoParser$AppInfo.<init>:83)  - Kafka commitId : 23c69d62a0cabf06
08 May 2018 21:16:17,391 INFO  [lifecycleSupervisor-1-2] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119)  - Monitored counter group for type: SINK, name: kafkaSink: Successfully registered new MBean.
08 May 2018 21:16:17,392 INFO  [lifecycleSupervisor-1-2] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95)  - Component type: SINK, name: kafkaSink started
08 May 2018 21:16:17,702 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119)  - Monitored counter group for type: SOURCE, name: avroSrc: Successfully registered new MBean.
08 May 2018 21:16:17,703 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95)  - Component type: SOURCE, name: avroSrc started
08 May 2018 21:16:17,705 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.source.AvroSource.start:260)  - Avro source avroSrc started.

kafka-console-consumer ( with 1 partition )

[kafka@big-slave4 bin]$ ./kafka-console-consumer.sh --bootstrap-server big-slave2:9092,big-slave3:9092,big-slave4:9092 \
--topic randomTopic --group randomTopicConsumer --from-beginning
567,8x47-tj0wmg3h-bt9s-jvvj,8vg,3y1cxlb85je72u,42ne9c0ob4lcajsxad7v,2018-05-08 21:29:34
568,c8lo-4ean8xq7-aonp-ro3e,k4n,1fb7sod807xcyl,2imuw5d0q7j77bm2avlb,2018-05-08 21:29:34
569,a410-3071fmcy-eqdg-dkug,hnf,cs3jj9ohyme8g7,luam2prhluv4zkhw4qn1,2018-05-08 21:29:34
570,9pq9-qxvia9z2-2l89-ypcw,gpz,lxujkgbc9vgmvq,4ml8sdgce8m005q1yjk8,2018-05-08 21:29:39
571,8dst-zmb60daj-zsgs-up7l,f1f,1iyfi4kdi6hknm,xhlh6n1hsoeu1ysoo20v,2018-05-08 21:29:39
572,ccbs-hk0w2ph8-qpq4-s5go,vv5,0ga9v49z0xgqe3,cybo7bw90vnthhmtey8y,2018-05-08 21:29:39
573,htym-58w20l9h-3f13-4db9,2xw,0qzjt9hifevhd2,kbjzbqkr0ovaj8e38u1o,2018-05-08 21:29:39
574,y4oo-euvenmzs-ilxe-4d8h,prc,huom3k5sgmt9fg,9ewtbgro660txwyu6mwx,2018-05-08 21:29:39
575,juy4-dlh4yp5s-lk9m-c1zo,mox,9jk0rp1rlpoolv,o3v6ncm4pc9idj3aty0i,2018-05-08 21:29:39
576,l0bp-xb410czl-47s2-b6qp,lpy,1hd5ca2zkn2nsi,fwzl4a1fcxezt0o09ap1,2018-05-08 21:29:39
577,0mqo-pcq523ok-fkx4-eta4,7c9,ljbam7gtkb9fmo,dbjczk7sl9hwu4grss9h,2018-05-08 21:29:39
578,4t04-g6vzfvnr-2luf-rwcm,ezy,k7flleysc37kn2,w1dk698dpqdesxbfdan5,2018-05-08 21:29:39
579,fswx-wp2ge3xn-ief0-pqe0,i28,hg3ps5jxrttchm,tbmk5kqdouviw2uun205,2018-05-08 21:29:39
580,2o68-xa7s0qa3-ct6g-bjbc,ou8,nuaku85j9bvc95,2rnxhch5vdnbjb8pv07b,2018-05-08 21:29:44
581,ip1y-qtqkyq1v-pccp-6sls,x4j,iappssjep8p581,vz2iwombev6wy6i2mu90,2018-05-08 21:29:44
582,xfc7-hor34vfu-lka2-b1y8,cyx,b0iz89quwrxqfp,y14aue5v2i3ypgz0aukk,2018-05-08 21:29:44
583,mi1u-hvec7ln0-6314-v2u8,zkc,hzohkgeydl0v5y,m73bfpuecmhmfpm5ad19,2018-05-08 21:29:44
584,hvd7-vyjkok25-tm8f-c3np,2da,yfyj6a3x1c723v,7bnithfyrr81mn7k9y0l,2018-05-08 21:29:44
585,wgki-5pv861sg-9zls-uxif,bhd,m29saftsvdte4c,t0qss6kx6bvnnsuetnmj,2018-05-08 21:29:44
586,ykw1-jtavhd37-qsl1-8x9i,x12,cnkpasqt5ak2bc,s535vm6rfmt9apxuabc0,2018-05-08 21:29:44
587,inxs-xgole4kl-x8t1-are0,0dw,nfixwzqd42s77h,3cqsbs5fg194566c4exi,2018-05-08 21:29:44
588,i3no-w2dgjnsi-h4bt-hktr,xvx,vwzke620ivpbow,ykoup8fzuq577j1azc6n,2018-05-08 21:29:44
589,s6wt-aczjil8b-qdb0-yqhw,kgg,t9dcsdbd81wj5k,lilzcct734lpa8p8k7zb,2018-05-08 21:29:44
590,ndat-q5ibryjh-6x1m-57pj,0b5,b0rtsvg1kj4dbx,w52q7jyoimniij7e0jbl,2018-05-08 21:29:49
591,vll6-hk1bckif-shc5-fo1k,06d,d503mbt18k8bnl,6pq6w4qrmvh9fn3zggtv,2018-05-08 21:29:49
592,gzjk-qu9df2yg-sdk6-p3sj,ofg,qrksf0x3lu67gy,5o3qapmjniv9wt7bboqc,2018-05-08 21:29:49
593,8l4a-7mqbe49x-by2g-7yvd,u3a,jnjkb7h5waoyyi,q3yurprzy191i0so3m9o,2018-05-08 21:29:49
594,8048-t2g03i43-fo8t-wbld,cd2,rljexrov7mp6xi,5nlers6a50pintul6vjr,2018-05-08 21:29:49
595,y591-se4ft9qs-0u4r-1iui,93w,3i41sjzjr0713m,yjbmh0t9wzcoy3bh03d2,2018-05-08 21:29:49
596,uvpp-a5qtull1-q0wf-nzbh,j02,04tps06tdj7s3t,fsq8rh2vi485917nslhc,2018-05-08 21:29:49
597,h5ou-hudfxqq4-e5lj-5t44,fka,wgnc1z5gmb22r0,qgjyeumed42jwgixkj4g,2018-05-08 21:29:49
598,e4cd-djupctvn-zlx9-svg7,758,g3u4h01modi5yy,40d8nximtaz1k1946js9,2018-05-08 21:29:49
599,mfcv-34kofom0-yxv1-d1tb,2jz,by6k4u2gyvqg29,z9brkt3tjm7u4rueyto0,2018-05-08 21:29:49
600,ib7p-drid6a57-wabh-xmri,vaa,fiyin16k7u48wj,ytuum0dlkzadv3rm4rsa,2018-05-08 21:29:54
601,8gju-2efhtg9q-cos3-8f88,ab9,pug2intc56w5r3,dco8b6x9u8of5mv4og7t,2018-05-08 21:29:54
602,ul0o-0wmpul2n-kdjv-1asa,gt2,tzl4nphvyhexed,guzhola4eh7i1nihk9xh,2018-05-08 21:29:54
603,k5q2-0kag0r23-04dr-cu3j,d2w,9jmmhhc6r70e4m,rhw8sq6vpzyffuwozirk,2018-05-08 21:29:54
604,tt8z-vtsusu5n-vjmo-2pe9,w1y,8bcfw8h0108ibb,z9hoyr3pcn5bwcsamhy5,2018-05-08 21:29:54
605,854o-1kgfdslw-ia5y-vl1l,kyw,5tn2acic36dl0m,iz2354pn3l1s0itytc24,2018-05-08 21:29:54
606,rjth-oz03jz38-4ptn-ilvu,0h8,yvqks1d1s5hrv0,gg38d52s6hzpo53splc6,2018-05-08 21:29:54
607,kswu-rpxgxb4x-cqcy-ckqy,ekz,8c46sxj6911k02,88gu5xdqop7kp2sareby,2018-05-08 21:29:54
608,3isp-ethcs63v-hs9e-yzqp,pnj,szz4sqksuf5stq,d5adbqt8lkyug5050o58,2018-05-08 21:29:54
609,h4mx-m1bjtjez-c9nz-wyn0,g4i,41qgw8k3c8prpr,hywwn35kqmvnrfjsdlwy,2018-05-08 21:29:54

 

kafka-console-consumer ( with 3 partitions )

4770,randomGen-192.168.100.18,X02,123XXXXXXXXXXXXXXXX,3.35,3.203,3.283,3.417,2018-05-10 21:58:54.026
4773,randomGen-192.168.100.18,X02,123XXXXXXXXXXXXXXXX,3.35,3.147,3.283,3.417,2018-05-10 21:58:54.026
4776,randomGen-192.168.100.18,Y03,456XXXXXXXXXXXXXXXX,1.63,1.702,1.597,1.663,2018-05-10 21:58:54.026
4779,randomGen-192.168.100.18,X02,123XXXXXXXXXXXXXXXX,3.35,3.358,3.283,3.417,2018-05-10 21:58:54.026
4771,randomGen-192.168.100.18,Z01,789XXXXXXXXXXXXXXXX,2.21,2.260,2.166,2.254,2018-05-10 21:58:54.026
4774,randomGen-192.168.100.18,Z01,789XXXXXXXXXXXXXXXX,2.21,2.206,2.166,2.254,2018-05-10 21:58:54.026
4777,randomGen-192.168.100.18,X02,123XXXXXXXXXXXXXXXX,3.35,3.370,3.283,3.417,2018-05-10 21:58:54.026
4772,randomGen-192.168.100.18,Z01,789XXXXXXXXXXXXXXXX,2.21,2.240,2.166,2.254,2018-05-10 21:58:54.026
4775,randomGen-192.168.100.18,Z01,789XXXXXXXXXXXXXXXX,2.21,2.130,2.166,2.254,2018-05-10 21:58:54.026
4778,randomGen-192.168.100.18,X02,123XXXXXXXXXXXXXXXX,3.35,3.358,3.283,3.417,2018-05-10 21:58:54.026
4780,randomGen-192.168.100.18,Y03,456XXXXXXXXXXXXXXXX,1.63,1.831,1.597,1.663,2018-05-10 21:58:59.026
4783,randomGen-192.168.100.18,Y03,456XXXXXXXXXXXXXXXX,1.63,1.466,1.597,1.663,2018-05-10 21:58:59.026
4786,randomGen-192.168.100.18,X02,123XXXXXXXXXXXXXXXX,3.35,3.431,3.283,3.417,2018-05-10 21:58:59.026
4789,randomGen-192.168.100.18,Z01,789XXXXXXXXXXXXXXXX,2.21,2.446,2.166,2.254,2018-05-10 21:58:59.026
4781,randomGen-192.168.100.18,Y03,456XXXXXXXXXXXXXXXX,1.63,1.631,1.597,1.663,2018-05-10 21:58:59.026
4784,randomGen-192.168.100.18,X02,123XXXXXXXXXXXXXXXX,3.35,3.306,3.283,3.417,2018-05-10 21:58:59.026
4787,randomGen-192.168.100.18,X02,123XXXXXXXXXXXXXXXX,3.35,3.381,3.283,3.417,2018-05-10 21:58:59.026
4782,randomGen-192.168.100.18,Y03,456XXXXXXXXXXXXXXXX,1.63,1.591,1.597,1.663,2018-05-10 21:58:59.026
4785,randomGen-192.168.100.18,Y03,456XXXXXXXXXXXXXXXX,1.63,1.509,1.597,1.663,2018-05-10 21:58:59.026
4788,randomGen-192.168.100.18,X02,123XXXXXXXXXXXXXXXX,3.35,3.415,3.283,3.417,2018-05-10 21:58:59.026

 
