filebeat rpm install

DBILITY 2018. 5. 12. 17:32

I tested filebeat's Kafka output.

To get the logs into Elasticsearch, a dataflow is set up with NiFi so they can be stored in Elasticsearch and searched.

A dashboard in Kibana still needs to be put together.

[kafka@big-slave4 ~]$ kafka-topics.sh \
> --zookeeper big-master:2181,big-slave1:2181,big-slave2:2181/kafka-cluster \
> --topic kafka-log --partitions 3 --replication-factor 2 --create
Created topic "kafka-log".
[kafka@big-slave4 ~]$ exit
logout
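
Not shown above, but kafka-topics.sh --describe (against the same zookeeper chroot used for --create) is a quick way to confirm the partition/replica layout of the new topic:

kafka-topics.sh \
  --zookeeper big-master:2181,big-slave1:2181,big-slave2:2181/kafka-cluster \
  --topic kafka-log --describe
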
[root@big-slave4 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
[root@big-slave4 ~]# cd /etc/yum.repos.d
[root@big-slave4 yum.repos.d]# vi elastic.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
:wq!
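
(Not in the transcript: the new repository can be checked before installing.)

yum repolist enabled | grep elastic
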
[root@big-slave4 yum.repos.d]# yum -y install filebeat
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.kakao.com
 * epel: www.ftp.ne.jp
 * extras: mirror.kakao.com
 * updates: mirror.kakao.com
Resolving Dependencies
--> Running transaction check
---> Package filebeat.x86_64 0:6.2.4-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================
 Package                 Arch                  Version                   Repository                  Size
==========================================================================================================
Installing:
 filebeat                x86_64                6.2.4-1                   elastic-6.x                 12 M

Transaction Summary
==========================================================================================================
Install  1 Package

Total download size: 12 M
Installed size: 49 M
Downloading packages:
filebeat-6.2.4-x86_64.rpm                                                          |  12 MB  00:00:10
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : filebeat-6.2.4-1.x86_64                                                                1/1
  Verifying  : filebeat-6.2.4-1.x86_64                                                                1/1

Installed:
  filebeat.x86_64 0:6.2.4-1

Complete!
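
Also not in the transcript, but as a sanity check the installed package and binary version can be confirmed before editing the config:

rpm -q filebeat
filebeat version
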
[root@big-slave4 yum.repos.d]# vi /etc/filebeat/filebeat.yml
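# custom top-level key, expanded below as ${kafka.home} in the prospector path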
kafka.home: /kafka
#=========================== Filebeat prospectors =============================

filebeat.prospectors:

- type: log

  enabled: true

  paths:
    - ${kafka.home}/logs/server.log*

  ### Multiline options

  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after
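  # i.e. lines that do not start with '[' are appended to the previous event,
  # so multi-line entries such as stack traces in server.log stay together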
  fields.pipeline: kafka-log

#============================= Filebeat modules ===============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ Outputs =====================================

#------------------------------ Kafka output ----------------------------------
output.kafka:
 hosts: ["big-slave2:9092","big-slave3:9092","big-slave4:9092"]
 topic: 'kafka-log'
 partition.round_robin:
   reachable_only : false
 required_acks: 1
 compression: gzip
 max_message_bytes: 1000000
:wq!
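
Before starting the service it's worth validating the file; filebeat 6.x has a test subcommand for this (not run in the session above):

filebeat test config -c /etc/filebeat/filebeat.yml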

[root@big-slave4 yum.repos.d]# systemctl start filebeat.service
[root@big-slave4 yum.repos.d]# systemctl status filebeat.service
● filebeat.service - filebeat
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2018-05-12 17:53:48 KST; 7s ago
     Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
 Main PID: 2548 (filebeat)
   CGroup: /system.slice/filebeat.service
           └─2548 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

May 12 17:53:48 big-slave4 systemd[1]: Started filebeat.
May 12 17:53:48 big-slave4 systemd[1]: Starting filebeat...
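
The status output shows the unit is still disabled, so it won't start after a reboot; enable it, and the log output can be followed through journald:

systemctl enable filebeat.service
journalctl -u filebeat.service -f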

[root@big-slave4 yum.repos.d]# su - kafka
Last login: Sat May 12 17:41:55 KST 2018 on pts/0
[kafka@big-slave4 ~]$ kafka-console-consumer.sh --bootstrap-server big-slave2:9092,big-slave3:9092,big-slave4:9092 --topic kafka-log --group kafka-log-group-consumers --from-beginning --max-messages 10
{"@timestamp":"2018-05-12T08:52:21.675Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"source":"/kafka/logs/server.log.2018-05-12-13","offset":314,"message":"[2018-05-12 13:18:39,175] INFO [GroupMetadataManager brokerId=3] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)","prospector":{"type":"log"},"fields":{"pipeline":"kafka-log"},"beat":{"name":"big-slave4","hostname":"big-slave4","version":"6.2.4"}}
{"@timestamp":"2018-05-12T08:52:21.675Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"message":"[2018-05-12 13:48:39,175] INFO [GroupMetadataManager brokerId=3] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)","source":"/kafka/logs/server.log.2018-05-12-13","offset":785,"prospector":{"type":"log"},"fields":{"pipeline":"kafka-log"},"beat":{"name":"big-slave4","hostname":"big-slave4","version":"6.2.4"}}
{"@timestamp":"2018-05-12T08:52:21.675Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"fields":{"pipeline":"kafka-log"},"beat":{"name":"big-slave4","hostname":"big-slave4","version":"6.2.4"},"source":"/kafka/logs/server.log.2018-05-12-14","offset":471,"message":"[2018-05-12 14:28:39,175] INFO [GroupMetadataManager brokerId=3] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)","prospector":{"type":"log"}}
{"@timestamp":"2018-05-12T08:52:21.676Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"offset":157,"message":"[2018-05-12 16:08:39,175] INFO [GroupMetadataManager brokerId=3] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)","prospector":{"type":"log"},"fields":{"pipeline":"kafka-log"},"beat":{"version":"6.2.4","name":"big-slave4","hostname":"big-slave4"},"source":"/kafka/logs/server.log.2018-05-12-16"}
{"@timestamp":"2018-05-12T08:52:21.676Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"source":"/kafka/logs/server.log.2018-05-12-16","prospector":{"type":"log"},"fields":{"pipeline":"kafka-log"},"beat":{"hostname":"big-slave4","version":"6.2.4","name":"big-slave4"},"offset":628,"message":"[2018-05-12 16:38:39,175] INFO [GroupMetadataManager brokerId=3] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)"}
{"@timestamp":"2018-05-12T08:52:21.676Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"beat":{"name":"big-slave4","hostname":"big-slave4","version":"6.2.4"},"offset":314,"message":"[2018-05-12 17:18:39,175] INFO [GroupMetadataManager brokerId=3] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)","source":"/kafka/logs/server.log","prospector":{"type":"log"},"fields":{"pipeline":"kafka-log"}}
{"@timestamp":"2018-05-12T08:52:21.676Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"fields":{"pipeline":"kafka-log"},"beat":{"hostname":"big-slave4","version":"6.2.4","name":"big-slave4"},"offset":471,"message":"[2018-05-12 01:28:39,175] INFO [GroupMetadataManager brokerId=3] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)","source":"/kafka/logs/server.log.2018-05-12-01","prospector":{"type":"log"}}
{"@timestamp":"2018-05-12T08:52:21.676Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"source":"/kafka/logs/server.log","offset":461,"message":"[2018-05-12 17:18:46,701] INFO [ReplicaFetcherManager on broker 3] Removed fetcher for partitions kafka-log-1 (kafka.server.ReplicaFetcherManager)","prospector":{"type":"log"},"fields":{"pipeline":"kafka-log"},"beat":{"name":"big-slave4","hostname":"big-slave4","version":"6.2.4"}}
{"@timestamp":"2018-05-12T08:52:21.676Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"source":"/kafka/logs/server.log","offset":1623,"message":"[2018-05-12 17:18:46,712] INFO Created log for partition kafka-log-1 in /kafka/data with properties {compression.type -\u003e producer, message.format.version -\u003e 1.1-IV0, file.delete.delay.ms -\u003e 60000, max.message.bytes -\u003e 1000012, min.compaction.lag.ms -\u003e 0, message.timestamp.type -\u003e CreateTime, min.insync.replicas -\u003e 1, segment.jitter.ms -\u003e 0, preallocate -\u003e false, min.cleanable.dirty.ratio -\u003e 0.5, index.interval.bytes -\u003e 4096, unclean.leader.election.enable -\u003e false, retention.bytes -\u003e -1, delete.retention.ms -\u003e 86400000, cleanup.policy -\u003e [delete], flush.ms -\u003e 9223372036854775807, segment.ms -\u003e 604800000, segment.bytes -\u003e 1073741824, retention.ms -\u003e 604800000, message.timestamp.difference.max.ms -\u003e 9223372036854775807, segment.index.bytes -\u003e 10485760, flush.messages -\u003e 9223372036854775807}. (kafka.log.LogManager)","prospector":{"type":"log"},"fields":{"pipeline":"kafka-log"},"beat":{"name":"big-slave4","hostname":"big-slave4","version":"6.2.4"}}
{"@timestamp":"2018-05-12T08:52:21.677Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.4","topic":"kafka-log"},"prospector":{"type":"log"},"fields":{"pipeline":"kafka-log"},"beat":{"version":"6.2.4","name":"big-slave4","hostname":"big-slave4"},"source":"/kafka/logs/server.log","offset":1904,"message":"[2018-05-12 17:18:46,713] INFO Replica loaded for partition kafka-log-1 with initial high watermark 0 (kafka.cluster.Replica)"}
Processed a total of 10 messages
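
Not part of the test above, but since the console consumer committed offsets under the kafka-log-group-consumers group, per-partition offsets and lag can also be checked:

kafka-consumer-groups.sh \
  --bootstrap-server big-slave2:9092,big-slave3:9092,big-slave4:9092 \
  --describe --group kafka-log-group-consumers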

 
