Kafka server does not start
I installed Kafka and ZooKeeper on my macOS machine using Homebrew, and I'm trying to launch ZooKeeper and the Kafka server following this blog post.



zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties works fine, as confirmed with telnet localhost 2181. Launching kafka-server-start /usr/local/etc/kafka/server.properties produces the following output (the error is near the end). What should I do to get the Kafka server to start successfully?



$ kafka-server-start /usr/local/etc/kafka/server.properties
[2018-11-16 13:58:53,513] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2018-11-16 13:58:54,002] INFO starting (kafka.server.KafkaServer)
[2018-11-16 13:58:54,003] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2018-11-16 13:58:54,024] INFO [ZooKeeperClient] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,034] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,034] INFO Client environment:host.name=martinas-mbp.fritz.box (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,034] INFO Client environment:java.version=1.8.0_192 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,035] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,035] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_192.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,035] INFO Client environment:java.class.path=/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/activation-1.1.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/argparse4j-0.7.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/commons-lang3-3.5.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-api-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-basic-auth-extension-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-file-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-json-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-runtime-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-transforms-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/guava-20.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/hk2-api-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/hk2-locator-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/hk2-utils-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-annotations-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-core-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-databind-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-jaxrs-base-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-jaxrs-json-provider-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-module-jaxb-annotations-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.inject-1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.inject-2.5.0-b42.jar:/usr/local/Cel
lar/kafka/2.0.0/libexec/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.ws.rs-api-2.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-client-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-common-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-container-servlet-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-container-servlet-core-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-hk2-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-media-jaxb-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-server-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-client-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-continuation-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-http-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-io-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-security-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-server-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-servlet-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-servlets-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-util-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-clients-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-log4j-appender-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-examples-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-scala_2.12-2.0.0.jar:/usr/local/Cellar/kafka/2.
0.0/libexec/bin/../libs/kafka-streams-test-utils-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-tools-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka_2.12-2.0.0-sources.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka_2.12-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/log4j-1.2.17.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/lz4-java-1.4.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/maven-artifact-3.5.3.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/metrics-core-2.2.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/plexus-utils-3.1.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/reflections-0.9.11.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/rocksdbjni-5.7.3.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/scala-library-2.12.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/scala-logging_2.12-3.9.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/scala-reflect-2.12.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/slf4j-api-1.7.25.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/slf4j-log4j12-1.7.25.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/snappy-java-1.1.7.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/zkclient-0.10.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/zookeeper-3.4.13.jar (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:java.library.path=/Users/michelangelo/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:java.io.tmpdir=/var/folders/s_/_q9gnhkn0816xyzxh3sd7vdh0000gp/T/ (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.version=10.12.6 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.name=michelangelo (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.home=/Users/michelangelo (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.dir=/bin (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,038] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6ef888f6 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,055] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,055] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,069] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,078] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10000041838000b, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,082] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,277] INFO Cluster ID = 8TON7fHXTUuVjzYM9iHZHQ (kafka.server.KafkaServer)
[2018-11-16 13:58:54,352] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /usr/local/var/lib/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters =
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites =
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-11-16 13:58:54,361] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /usr/local/var/lib/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters =
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites =
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,385] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,411] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /usr/local/var/lib/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:241)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:238)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:97)
at kafka.log.LogManager$.apply(LogManager.scala:968)
at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2018-11-16 13:58:54,413] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2018-11-16 13:58:54,417] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,420] INFO Session: 0x10000041838000b closed (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,421] INFO EventThread shut down for session: 0x10000041838000b (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,422] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,423] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,407] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2018-11-16 13:58:57,408] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2018-11-16 13:58:57,411] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
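For reference, the fatal error above says another process holds the .lock file in the configured log.dirs (/usr/local/var/lib/kafka-logs here). A quick way to check which process is holding it, as a sketch assuming the lsof utility is available and the path matches your log.dirs setting:

```
# Show any process with the Kafka lock file open; the PID column
# identifies the already-running broker (if any).
lsof /usr/local/var/lib/kafka-logs/.lock
```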
  • Can you share more of the output log? These lines don't contain the reason for the shutdown. – Edson Medina Nov 16 '18 at 12:57
  • I edited the question to include the entire output log. – albus_c Nov 16 '18 at 13:02
  • By the way, a solution I found is to follow the Homebrew services manager (see here) and use the commands brew services start kafka and similar. – albus_c Nov 16 '18 at 13:07
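
The Homebrew services approach mentioned in the comments can be sketched as follows (a hypothetical session; the service names assume the Homebrew formulas are named zookeeper and kafka, and brew services manages any already-running copies so the log-directory lock is not held twice):

```
# Let Homebrew's service manager own both processes instead of
# starting them manually in a terminal.
brew services start zookeeper
brew services start kafka

# Verify both services report a "started" status.
brew services list
```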
[2018-11-16 13:58:54,036] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.version=10.12.6 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.name=michelangelo (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.home=/Users/michelangelo (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.dir=/bin (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,038] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6ef888f6 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,055] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,055] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,069] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,078] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10000041838000b, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,082] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,277] INFO Cluster ID = 8TON7fHXTUuVjzYM9iHZHQ (kafka.server.KafkaServer)
[2018-11-16 13:58:54,352] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /usr/local/var/lib/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters =
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites =
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-11-16 13:58:54,361] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /usr/local/var/lib/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters =
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites =
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,385] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,411] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /usr/local/var/lib/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:241)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:238)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:97)
at kafka.log.LogManager$.apply(LogManager.scala:968)
at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2018-11-16 13:58:54,413] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2018-11-16 13:58:54,417] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,420] INFO Session: 0x10000041838000b closed (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,421] INFO EventThread shut down for session: 0x10000041838000b (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,422] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,423] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,407] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2018-11-16 13:58:57,408] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2018-11-16 13:58:57,411] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
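The fatal error above means another process already holds the exclusive lock on the `.lock` file inside `log.dirs` (typically a broker instance that is still running, for example one started earlier in another shell or by `brew services`). A minimal Python sketch of the mechanism, using `flock` as a stand-in for the Java NIO `FileLock` Kafka actually uses (the paths here are temporary, not the real log directory):

```python
import fcntl
import os
import tempfile

# Stand-in for a broker's log directory (Kafka really locks
# /usr/local/var/lib/kafka-logs/.lock via Java NIO FileLock; flock here
# just illustrates the same exclusive-lock behaviour).
log_dir = tempfile.mkdtemp()
lock_path = os.path.join(log_dir, ".lock")

# First "broker" takes an exclusive, non-blocking lock and keeps it open.
first = open(lock_path, "w")
fcntl.flock(first, fcntl.LOCK_EX | fcntl.LOCK_NB)

# A second "broker" pointing at the same log.dirs fails to acquire it,
# which is exactly the KafkaException seen in the log above.
second = open(lock_path, "w")
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
    acquired = True
except BlockingIOError:
    acquired = False

print("second broker acquired lock:", acquired)  # False while 'first' holds it
```

Stopping the other broker (`kafka-server-stop` or `brew services stop kafka`) releases the lock; only delete a stale `.lock` file once no broker process is running.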
  • Can you share more of the output log? These lines don't contain the reason for the shutdown.
    – Edson Medina
    Nov 16 '18 at 12:57






  • I edited the question to include the entire output log.
    – albus_c
    Nov 16 '18 at 13:02






  • By the way, a solution I found is to use the Homebrew services manager (see here) and commands such as brew services start kafka.
    – albus_c
    Nov 16 '18 at 13:07
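Before using either route, it can help to confirm whether a broker is still running and holding the log-directory lock (a sketch; `pgrep` is available on both macOS and Linux):

```shell
# List any running broker JVM; Kafka's main class is kafka.Kafka.
pgrep -fl 'kafka\.Kafka' || echo "no running Kafka broker found"

# If one shows up, stop it cleanly before retrying:
# kafka-server-stop          # for a broker started from a shell
# brew services stop kafka   # for one managed by launchd/Homebrew
```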














password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites =
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-11-16 13:58:54,361] INFO KafkaConfig values: [identical to the block above] (kafka.server.KafkaConfig)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,385] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,411] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /usr/local/var/lib/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:241)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:238)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:97)
at kafka.log.LogManager$.apply(LogManager.scala:968)
at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2018-11-16 13:58:54,413] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2018-11-16 13:58:54,417] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,420] INFO Session: 0x10000041838000b closed (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,421] INFO EventThread shut down for session: 0x10000041838000b (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,422] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,423] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,407] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2018-11-16 13:58:57,408] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2018-11-16 13:58:57,411] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
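
The fatal error above says another process or thread holds the `.lock` file in `/usr/local/var/lib/kafka-logs` (the `log.dirs` value in the config dump). A way to check whether a stray broker from an earlier launch is still holding that directory, and to clean it up, is sketched below (paths come from the log above; stopping the processes is an assumption that they are safe to kill):

```shell
#!/usr/bin/env bash
# Directory reported in the error (log.dirs in server.properties)
LOG_DIR=/usr/local/var/lib/kafka-logs

# See whether any running process still holds the broker's lock file
if lsof "$LOG_DIR/.lock" >/dev/null 2>&1; then
  echo "another process holds $LOG_DIR/.lock"
else
  echo "no process holds $LOG_DIR/.lock"
fi

# If Kafka was ever started as a Homebrew service, stop it there too
if command -v brew >/dev/null 2>&1; then
  brew services stop kafka || true
fi

# Terminate any leftover broker JVMs (assumes they are safe to stop)
pkill -f 'kafka\.Kafka' || true
```

If no process holds the lock and no broker JVM is left, retrying `kafka-server-start /usr/local/etc/kafka/server.properties` would be the next step.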
I installed Kafka and Zookeeper on my OSX machine using Homebrew, and I'm trying to launch Zookeeper and Kafka-server following this blog post.



zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties works fine, as confirmed using telnet localhost 2181. Launching kafka-server-start /usr/local/etc/kafka/server.properties results in the following output (error at the end). What should I do to launch the Kafka server effectively?



$ kafka-server-start /usr/local/etc/kafka/server.properties
[2018-11-16 13:58:53,513] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2018-11-16 13:58:54,002] INFO starting (kafka.server.KafkaServer)
[2018-11-16 13:58:54,003] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2018-11-16 13:58:54,024] INFO [ZooKeeperClient] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,034] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,034] INFO Client environment:host.name=martinas-mbp.fritz.box (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,034] INFO Client environment:java.version=1.8.0_192 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,035] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,035] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_192.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,035] INFO Client environment:java.class.path=/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/activation-1.1.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/argparse4j-0.7.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/commons-lang3-3.5.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-api-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-basic-auth-extension-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-file-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-json-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-runtime-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/connect-transforms-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/guava-20.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/hk2-api-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/hk2-locator-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/hk2-utils-2.5.0-b42.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-annotations-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-core-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-databind-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-jaxrs-base-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-jaxrs-json-provider-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jackson-module-jaxb-annotations-2.9.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.inject-1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.inject-2.5.0-b42.jar:/usr/local/Cel
lar/kafka/2.0.0/libexec/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/javax.ws.rs-api-2.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-client-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-common-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-container-servlet-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-container-servlet-core-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-hk2-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-media-jaxb-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jersey-server-2.27.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-client-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-continuation-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-http-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-io-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-security-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-server-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-servlet-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-servlets-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jetty-util-9.4.11.v20180605.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-clients-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-log4j-appender-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-examples-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-streams-scala_2.12-2.0.0.jar:/usr/local/Cellar/kafka/2.
0.0/libexec/bin/../libs/kafka-streams-test-utils-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka-tools-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka_2.12-2.0.0-sources.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/kafka_2.12-2.0.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/log4j-1.2.17.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/lz4-java-1.4.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/maven-artifact-3.5.3.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/metrics-core-2.2.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/plexus-utils-3.1.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/reflections-0.9.11.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/rocksdbjni-5.7.3.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/scala-library-2.12.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/scala-logging_2.12-3.9.0.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/scala-reflect-2.12.6.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/slf4j-api-1.7.25.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/slf4j-log4j12-1.7.25.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/snappy-java-1.1.7.1.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/zkclient-0.10.jar:/usr/local/Cellar/kafka/2.0.0/libexec/bin/../libs/zookeeper-3.4.13.jar (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:java.library.path=/Users/michelangelo/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:java.io.tmpdir=/var/folders/s_/_q9gnhkn0816xyzxh3sd7vdh0000gp/T/ (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:os.version=10.12.6 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.name=michelangelo (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.home=/Users/michelangelo (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,036] INFO Client environment:user.dir=/bin (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,038] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6ef888f6 (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,055] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,055] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,069] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,078] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10000041838000b, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,082] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,277] INFO Cluster ID = 8TON7fHXTUuVjzYM9iHZHQ (kafka.server.KafkaServer)
[2018-11-16 13:58:54,352] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.0-IV1
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /usr/local/var/lib/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.0-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters =
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites =
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,384] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,385] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:54,411] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /usr/local/var/lib/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:241)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:238)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:97)
at kafka.log.LogManager$.apply(LogManager.scala:968)
at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2018-11-16 13:58:54,413] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2018-11-16 13:58:54,417] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,420] INFO Session: 0x10000041838000b closed (org.apache.zookeeper.ZooKeeper)
[2018-11-16 13:58:54,421] INFO EventThread shut down for session: 0x10000041838000b (org.apache.zookeeper.ClientCnxn)
[2018-11-16 13:58:54,422] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2018-11-16 13:58:54,423] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:55,390] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:56,393] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,398] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2018-11-16 13:58:57,407] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2018-11-16 13:58:57,408] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2018-11-16 13:58:57,411] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)






apache-kafka homebrew apache-zookeeper






edited Nov 16 '18 at 13:01

























asked Nov 16 '18 at 12:46









albus_c

  • Can you share more of the output log? These lines don't contain the reason for the shutdown.
    – Edson Medina
    Nov 16 '18 at 12:57

  • I edited the question to include the entire output log.
    – albus_c
    Nov 16 '18 at 13:02

  • By the way, a solution I found is to follow the Homebrew services manager (see here) and use the commands brew services start kafka and similar.
    – albus_c
    Nov 16 '18 at 13:07


















1 Answer
This is the issue:



org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /usr/local/var/lib/kafka-logs. A Kafka instance in another process or thread is using this directory.


There's another instance of Kafka running. Kill it first.



You should be able to identify it with



lsof /usr/local/var/lib/kafka-logs/.lock


EDIT:



try brew services stop kafka first.
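The failure mode can be sketched in miniature: the broker takes an exclusive, non-blocking OS-level lock on a `.lock` file in each log directory, and a second process pointed at the same directory fails to acquire it and aborts. The Python sketch below illustrates the same idea with `flock` in a temporary directory (Kafka itself uses the JVM's file-lock API, not this exact call; the paths and names here are stand-ins, not the real `/usr/local/var/lib/kafka-logs`):

```python
import fcntl
import os
import tempfile

# Stand-in for a broker's log directory and its .lock file
log_dir = tempfile.mkdtemp()
lock_path = os.path.join(log_dir, ".lock")

# "Broker 1" acquires an exclusive, non-blocking lock and holds it
broker1 = open(lock_path, "w")
fcntl.flock(broker1, fcntl.LOCK_EX | fcntl.LOCK_NB)

# "Broker 2" points at the same directory and attempts the same lock
broker2 = open(lock_path, "w")
try:
    fcntl.flock(broker2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    held_by_other = False
except BlockingIOError:
    # This is the situation the KafkaException reports:
    # another process (or thread) already owns the directory lock
    held_by_other = True

print(held_by_other)  # True: the directory is already in use
```

This is why stopping the stray process (via `brew services stop kafka` or by killing the PID that `lsof` reports) releases the lock and lets a fresh `kafka-server-start` succeed.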



























  • Thanks! Are there reasons to prefer NOT using the Homebrew command?
    – albus_c
    Nov 16 '18 at 13:17

  • @albus Not really, other than it's specific to Mac and you need to relearn how to start Kafka if using Linux. The brew command is just a wrapper around the other command.
    – cricket_007
    Nov 16 '18 at 14:35












answered Nov 16 '18 at 13:09









Edson Medina


