I have the following setup:

zookeeper: 3.4.12
kafka: kafka_2.11-1.1.0
server1: zookeeper + kafka
server2: zookeeper + kafka
server3: zookeeper + kafka
I created a topic with replication factor 3 and 3 partitions via the kafka-topics shell script:
./kafka-topics.sh --create --zookeeper localhost:2181 --topic test-flow --partitions 3 --replication-factor 3
Consumers use the group localConsumers. Everything works fine while the leader is up.
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-flow
Topic:test-flow  PartitionCount:3  ReplicationFactor:3  Configs:
    Topic: test-flow  Partition: 0  Leader: 3  Replicas: 3,2,1  Isr: 3,2,1
    Topic: test-flow  Partition: 1  Leader: 1  Replicas: 1,3,2  Isr: 1,3,2
    Topic: test-flow  Partition: 2  Leader: 2  Replicas: 2,1,3  Isr: 2,1,3
Consumer log:
Received FindCoordinator response ClientResponse(receivedTimeMs=1529508772673, latencyMs=217, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=1, clientId=consumer-1, correlationId=0), responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=NONE, node=myserver3:9092 (id: 3 rack: null)))
But if the leader goes down (systemctl stop kafka), I get an error in the consumer:
Node 3 is down, as expected:
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-flow
Topic:test-flow  PartitionCount:3  ReplicationFactor:3  Configs:
    Topic: test-flow  Partition: 0  Leader: 2  Replicas: 3,2,1  Isr: 2,1
    Topic: test-flow  Partition: 1  Leader: 1  Replicas: 1,3,2  Isr: 1,2
    Topic: test-flow  Partition: 2  Leader: 2  Replicas: 2,1,3  Isr: 2,1
Received FindCoordinator response ClientResponse(receivedTimeMs=1529507314193, latencyMs=36, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=1, clientId=consumer-1, correlationId=149), responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=COORDINATOR_NOT_AVAILABLE, node=:-1 (id: -1 rack: null)))
- Group coordinator lookup failed: The coordinator is not available.
- Coordinator discovery failed, refreshing metadata
The consumer cannot connect until the stopped leader comes back up, or until it reconnects with a different consumer group.

I don't understand why this happens. The consumer should rebalance to one of the other brokers, but it does not.
Try adding the following properties to server.properties and cleaning the zookeeper cache. It should help:

offsets.topic.replication.factor=3
default.replication.factor=3
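A sketch of the relevant server.properties excerpt (the comments are my reading of the two settings; the property names themselves come from the Kafka broker configuration):

```properties
# Replication factor of the internal __consumer_offsets topic.
# Only takes effect if set before the topic is first auto-created.
offsets.topic.replication.factor=3

# Replication factor used for automatically created topics.
default.replication.factor=3
```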
The root cause of the issue is that the offsets topic's replicas were not distributed across the nodes.

It is an auto-generated topic: __consumer_offsets

You can check it via:
$ ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic __consumer_offsets
Pay attention to this section of the docs: https://kafka.apache.org/documentation/#prodconfig

By default, __consumer_offsets is created with a replication factor of 1.

It is important to configure the replication factor before the kafka cluster first starts. Otherwise it may bring problems when reconfiguring instances, as in your case.
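If the cluster is already running and __consumer_offsets was created with RF 1, one way to repair it without recreating the cluster is a partition reassignment. This is only a sketch: the broker ids 1,2,3 match the Isr lists above, but __consumer_offsets has 50 partitions by default, so a real plan needs one entry per partition (only the first three are shown):

```shell
#!/bin/sh
# Sketch: raise the replication factor of __consumer_offsets by
# assigning each partition to all three brokers (ids 1,2,3 assumed).
# Only partitions 0-2 are listed; extend the list to all 50 partitions.
cat > /tmp/increase-offsets-rf.json <<'EOF'
{"version":1,"partitions":[
  {"topic":"__consumer_offsets","partition":0,"replicas":[1,2,3]},
  {"topic":"__consumer_offsets","partition":1,"replicas":[2,3,1]},
  {"topic":"__consumer_offsets","partition":2,"replicas":[3,1,2]}
]}
EOF

# These need a running cluster, so they are left commented here:
# ./kafka-reassign-partitions.sh --zookeeper localhost:2181 \
#     --reassignment-json-file /tmp/increase-offsets-rf.json --execute
# ./kafka-reassign-partitions.sh --zookeeper localhost:2181 \
#     --reassignment-json-file /tmp/increase-offsets-rf.json --verify
echo "reassignment plan written to /tmp/increase-offsets-rf.json"
```

After the reassignment completes, --describe on __consumer_offsets should show three replicas per partition, and the group coordinator can fail over when a broker stops.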