Installing Kafka

DBILITY 2018. 5. 6. 21:11

Let's set up a three-broker Kafka cluster by installing Kafka on big-slave2, big-slave3, and big-slave4.

The SSH key setup probably isn't strictly necessary, but a dedicated kafka account is added anyway: it makes copying files between the nodes easier, and systemd needs a user to run the service as.

# Add the kafka user on all three nodes that will form the cluster
[root@big-slave2 ~]# useradd kafka ; echo 'kafka' | passwd --stdin kafka ; usermod -G datagroup kafka
Changing password for user kafka.
passwd: all authentication tokens updated successfully.
[root@big-slave2 ~]# ssh big-slave3 "useradd kafka ; echo 'kafka' | passwd --stdin kafka ; usermod -G datagroup kafka"
Changing password for user kafka.
passwd: all authentication tokens updated successfully.
[root@big-slave2 ~]# ssh big-slave4 "useradd kafka ; echo 'kafka' | passwd --stdin kafka ; usermod -G datagroup kafka"
Changing password for user kafka.
passwd: all authentication tokens updated successfully.
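
A quick sanity check that the account and its group membership came out the same everywhere is a small loop from big-slave2 (a sketch; the host names and the datagroup group are the ones used in this series):

# Confirm the kafka account and its group membership on every node
for h in big-slave2 big-slave3 big-slave4; do
  echo "== $h =="
  ssh "$h" "id kafka"
done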

[root@big-slave2 ~]# su - kafka
[kafka@big-slave2 ~]$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/home/kafka/.ssh/id_rsa):
Created directory '/home/kafka/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/kafka/.ssh/id_rsa.
Your public key has been saved in /home/kafka/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:CIAUIEishxz3SMRWvgTLheth0E690SHZJLyoV66ZjkI kafka@big-slave2
The key's randomart image is:
+---[RSA 4096]----+
|O=*o===o.        |
|++.X==oo         |
|o.Oo*o+          |
|o..B.*..         |
| .+ +.. S        |
| E o .           |
|. . +            |
|. .+             |
|....             |
+----[SHA256]-----+
[kafka@big-slave2 ~]$ cd .ssh/
[kafka@big-slave2 .ssh]$ ls
id_rsa  id_rsa.pub
[kafka@big-slave2 .ssh]$ cp id_rsa.pub ./authorized_keys
[kafka@big-slave2 .ssh]$ ssh-copy-id kafka@big-slave2
[kafka@big-slave2 .ssh]$ ssh-copy-id kafka@big-slave3
[kafka@big-slave2 .ssh]$ ssh-copy-id kafka@big-slave4
[kafka@big-slave2 .ssh]$ exit
logout
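
Before moving on, it is worth confirming that the kafka account can now reach every node without a password prompt; a minimal check, run as the kafka user (su - kafka), might look like this:

# Should print each hostname without asking for a password
for h in big-slave2 big-slave3 big-slave4; do
  ssh -o BatchMode=yes "kafka@$h" hostname
done
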
[root@big-slave2 ~]# cd /usr/local/src/
[root@big-slave2 src]# wget http://apache.tt.co.kr/kafka/1.1.0/kafka_2.11-1.1.0.tgz
--2018-05-06 19:37:46--  http://apache.tt.co.kr/kafka/1.1.0/kafka_2.11-1.1.0.tgz
Resolving apache.tt.co.kr (apache.tt.co.kr)... 211.47.69.77
Connecting to apache.tt.co.kr (apache.tt.co.kr)|211.47.69.77|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 56969154 (54M) [application/x-gzip]
Saving to: ‘kafka_2.11-1.1.0.tgz’

100%[========================================================================>] 56,969,154  2.44MB/s   in 20s

2018-05-06 19:38:06 (2.69 MB/s) - ‘kafka_2.11-1.1.0.tgz’ saved [56969154/56969154]
[root@big-slave2 src]# tar -zxpf kafka_2.11-1.1.0.tgz
[root@big-slave2 src]# cd kafka_2.11-1.1.0
[root@big-slave2 kafka_2.11-1.1.0]# mkdir data
[root@big-slave2 kafka_2.11-1.1.0]# ls
bin  config  data  libs  LICENSE  NOTICE  site-docs
[root@big-slave2 kafka_2.11-1.1.0]# vi ./config/server.properties
broker.id=1
log.dirs=/kafka/data
zookeeper.connect=big-master:2181,big-slave1:2181,big-slave2:2181/kafka-cluster
:wq!
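
For reference, the same three edits can be scripted instead of typed into vi. This is only a sketch that assumes the stock 1.1.0 server.properties defaults are still in place:

# Non-interactive version of the edits above, run from the kafka_2.11-1.1.0 directory
sed -i 's/^broker.id=.*/broker.id=1/' ./config/server.properties
sed -i 's|^log.dirs=.*|log.dirs=/kafka/data|' ./config/server.properties
sed -i 's|^zookeeper.connect=.*|zookeeper.connect=big-master:2181,big-slave1:2181,big-slave2:2181/kafka-cluster|' ./config/server.properties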

[root@big-slave2 kafka_2.11-1.1.0]# cd ..
[root@big-slave2 src]# ls
build-mariadb     kafka_2.11-1.1.0.tgz  mariadb-10.2.14-compiled.tar.gz
kafka_2.11-1.1.0  mariadb-10.2.14       mariadb-10.2.14.tar.gz
[root@big-slave2 src]# cp -rf kafka_2.11-1.1.0 /bigdata/
[root@big-slave2 src]# chown -R kafka.kafka /bigdata/kafka_2.11-1.1.0/

[root@big-slave2 src]# rsync -az /bigdata/kafka_2.11-1.1.0/ big-slave3:/bigdata/kafka_2.11-1.1.0/
root@big-slave3's password:
[root@big-slave2 src]# rsync -az /bigdata/kafka_2.11-1.1.0/ big-slave4:/bigdata/kafka_2.11-1.1.0/
root@big-slave4's password:
[root@big-slave2 src]# ssh big-slave3 "ln -s /bigdata/kafka_2.11-1.1.0/ /kafka"
root@big-slave3's password:
[root@big-slave2 src]# ssh big-slave4 "ln -s /bigdata/kafka_2.11-1.1.0/ /kafka"
root@big-slave4's password:
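
A short check that the copy, the ownership, and the /kafka symlink came out the same on the other two nodes (a sketch; it assumes root can still reach them over ssh):

# Verify the install directory, its owner, and the /kafka symlink on the other brokers
for h in big-slave3 big-slave4; do
  echo "== $h =="
  ssh "$h" "ls -ld /bigdata/kafka_2.11-1.1.0 /kafka"
done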

# Apply the same settings on the other servers as well
[root@big-slave2 src]# ln -s /bigdata/kafka_2.11-1.1.0 /kafka
[root@big-slave2 src]# vi /etc/profile
export KAFKA_HOME=/kafka
export PATH=$PATH:$KAFKA_HOME/bin
:wq!
[root@big-slave2 src]# source /etc/profile
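
Once the profile has been sourced, a quick check confirms KAFKA_HOME is set and the Kafka scripts resolve through PATH:

# Confirm the environment took effect
echo "$KAFKA_HOME"
which kafka-server-start.sh
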
# Edit the broker.id on the other brokers
[root@big-slave3 kafka_2.11-1.1.0]# vi ./config/server.properties
broker.id=2
[root@big-slave4 kafka_2.11-1.1.0]# vi ./config/server.properties
broker.id=3
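
Each broker must end up with a unique broker.id; a one-liner per node makes it easy to confirm that big-slave2, big-slave3, and big-slave4 report 1, 2, and 3 respectively (a sketch, run from big-slave2 as root):

# Every node should report a different broker.id
for h in big-slave2 big-slave3 big-slave4; do
  echo -n "$h: "
  ssh "$h" "grep '^broker.id' /kafka/config/server.properties"
done
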
# Apply the same unit file on the other servers as well
[root@big-slave2 src]# vi /etc/systemd/system/kafka.service
[Unit]
Description = kafka ( ver. 2.11_1.1.0 )
After = network.target
Requires = network.target

[Service]
Type=simple
User=kafka
Group=kafka
SyslogIdentifier=kafka-server
WorkingDirectory=/kafka
Environment=JAVA_HOME=/jdk
Environment=JMX_PORT=9999
Restart=no
RestartSec=0s
ExecStart=/kafka/bin/kafka-server-start.sh /kafka/config/server.properties
ExecStop=/kafka/bin/kafka-server-stop.sh
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
:wq!
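
The unit file only exists on big-slave2 at this point; as the comment above notes, it has to be put on the other two brokers as well. A sketch of doing that in one pass:

# Copy the unit file to the other brokers, reload systemd, and enable the service there
for h in big-slave3 big-slave4; do
  scp /etc/systemd/system/kafka.service "$h":/etc/systemd/system/kafka.service
  ssh "$h" "systemctl daemon-reload && systemctl enable kafka.service"
done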

[root@big-slave2 src]# systemctl daemon-reload
[root@big-slave2 src]# systemctl status kafka.service
● kafka.service - kafka ( ver. 2.11_1.1.0 )
   Loaded: loaded (/etc/systemd/system/kafka.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@big-slave2 src]# systemctl enable kafka.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kafka.service to /etc/systemd/system/kafka.service.
[root@big-slave2 src]# systemctl status kafka.service
● kafka.service - kafka ( ver. 2.11_1.1.0 )
   Loaded: loaded (/etc/systemd/system/kafka.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
[root@big-slave2 src]# systemctl start kafka.service
[root@big-slave2 src]# systemctl status kafka.service
● kafka.service - kafka ( ver. 2.11_1.1.0 )
   Loaded: loaded (/etc/systemd/system/kafka.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2018-05-06 20:25:03 KST; 43s ago
 Main PID: 27424 (java)
   CGroup: /system.slice/kafka.service
           └─27424 /jdk/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Xloggc:/kafka/bin/../logs/kafkaServer-gc.l...

May 06 20:25:06 big-slave2 kafka-server[27424]: [2018-05-06 20:25:06,777] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
May 06 20:25:06 big-slave2 kafka-server[27424]: [2018-05-06 20:25:06,778] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
May 06 20:25:06 big-slave2 kafka-server[27424]: [2018-05-06 20:25:06,818] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:1000,blockEndProducerId:1999) by writing to Zk wi...oducerIdManager)
May 06 20:25:06 big-slave2 kafka-server[27424]: [2018-05-06 20:25:06,855] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
May 06 20:25:06 big-slave2 kafka-server[27424]: [2018-05-06 20:25:06,857] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
May 06 20:25:06 big-slave2 kafka-server[27424]: [2018-05-06 20:25:06,866] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
May 06 20:25:06 big-slave2 kafka-server[27424]: [2018-05-06 20:25:06,924] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
May 06 20:25:06 big-slave2 kafka-server[27424]: [2018-05-06 20:25:06,955] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)
May 06 20:25:06 big-slave2 kafka-server[27424]: [2018-05-06 20:25:06,955] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)
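
With all three brokers started (the same systemctl start has to be run on big-slave3 and big-slave4), a simple smoke test is to create a topic with replication factor 3 and check that every broker shows up as a replica. The topic name here is just an example:

# Create a replicated test topic and see where its partitions and replicas landed
kafka-topics.sh --zookeeper big-master:2181,big-slave1:2181,big-slave2:2181/kafka-cluster \
  --create --topic test --partitions 3 --replication-factor 3
kafka-topics.sh --zookeeper big-master:2181,big-slave1:2181,big-slave2:2181/kafka-cluster \
  --describe --topic test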

[zookeeper@big-slave2 ~]$ zkCli.sh
Connecting to localhost:2181
2018-05-06 21:12:26,599 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0, built on 11/01/2017 18:06 GMT
2018-05-06 21:12:26,605 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=big-slave2
2018-05-06 21:12:26,605 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_162
2018-05-06 21:12:26,608 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2018-05-06 21:12:26,608 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/bigdata/jdk1.8.0_162/jre
2018-05-06 21:12:26,608 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/zookeeper/bin/../build/classes:/zookeeper/bin/../build/lib/*.jar:/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/zookeeper/bin/../lib/log4j-1.2.16.jar:/zookeeper/bin/../lib/jline-0.9.94.jar:/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/zookeeper/bin/../zookeeper-3.4.11.jar:/zookeeper/bin/../src/java/lib/*.jar:/zookeeper/bin/../conf::/jdk/lib:/bigdata/lib
2018-05-06 21:12:26,609 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2018-05-06 21:12:26,609 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2018-05-06 21:12:26,609 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2018-05-06 21:12:26,609 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2018-05-06 21:12:26,609 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2018-05-06 21:12:26,609 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-693.21.1.el7.x86_64
2018-05-06 21:12:26,610 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=zookeeper
2018-05-06 21:12:26,610 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/home/zookeeper
2018-05-06 21:12:26,610 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/home/zookeeper
2018-05-06 21:12:26,612 [myid:] - INFO  [main:ZooKeeper@441] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@69d0a921
Welcome to ZooKeeper!
2018-05-06 21:12:26,651 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1035] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2018-05-06 21:12:26,748 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@877] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2018-05-06 21:12:26,759 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1302] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x3000000da17001e, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[kafka-cluster, bigdata, zookeeper, yarn-leader-election, hadoop-ha]
[zk: localhost:2181(CONNECTED) 1] ls /kafka-cluster
[cluster, controller_epoch, controller, brokers, admin, isr_change_notification, consumers, log_dir_event_notification, latest_producer_id_block, config]
[zk: localhost:2181(CONNECTED) 2] ls /kafka-cluster/brokers
[ids, topics, seqid]
[zk: localhost:2181(CONNECTED) 3] ls /kafka-cluster/brokers/ids
[1, 2, 3]
[zk: localhost:2181(CONNECTED) 4]
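
Beyond listing the ids, each broker's registration znode can be read back to see the host, port, and endpoints it advertised. On ZooKeeper 3.4.x a single command can also be passed straight to zkCli.sh:

# Read one broker's registration without opening an interactive session
zkCli.sh -server localhost:2181 get /kafka-cluster/brokers/ids/1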

 
