Created March 30, 2018 18:46
Yet Another Terraform Failure
kafka-standalone-ubuntu 2018/03/30 11:35:29 Error: Error applying plan:
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 2 error(s) occurred:
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 * module.zookeeper.module.zookeeper_servers.aws_launch_configuration.server_group: aws_launch_configuration.server_group: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 Please include the following information in your report:
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 Terraform Version: 0.11.5
kafka-standalone-ubuntu 2018/03/30 11:35:29 Resource ID: aws_launch_configuration.server_group
kafka-standalone-ubuntu 2018/03/30 11:35:29 Mismatch reason: attribute mismatch: image_id
kafka-standalone-ubuntu 2018/03/30 11:35:29 Diff One (usually from plan): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"root_block_device.0.iops":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "user_data":*terraform.ResourceAttrDiff{Old:"4306464cf78b10fbe8b97883b2ce5a19620d264f", New:"4306464cf78b10fbe8b97883b2ce5a19620d264f", NewComputed:false, NewRemoved:false, NewExtra:"#!/bin/bash\n# This script is meant to be run in the User Data of each ZooKeeper EC2 Instance while it's booting. The script uses the\n# run-exhibitor script to configure and start Exhibitor and ZooKeeper. Note that this script assumes it's running in\n# an AMI built from the Packer template in examples/zookeeper-ami/zookeeper.json.\n#\n# Note that many of the variables below are filled in via Terraform interpolation.\n\nset -e\n\n# Send the log output from this script to user-data.log, syslog, and the console\n# From: https://alestic.com/2010/12/ec2-user-data-output/\nexec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1\n\n# We created one ENI per server, so mount the ENI that has the same eni-0 tag as this server\necho \"Attaching ENI\"\n/usr/local/bin/attach-eni --eni-with-same-tag \"eni-0\"\n\n# Mount the EBS volume used for ZooKeeper transaction logs. Every write to ZooKeeper goes to the transaction log\n# and you get much better performance if you store this transaction log on a completely separate disk that does not\n# have to contend with any other I/O operations. 
The dataLogDir setting in ZooKeeper should be pointed at this volume.\n# https://zookeeper.apache.org/doc/r3.3.2/zookeeperAdmin.html#sc_advancedConfiguration\necho \"Mounting EBS volume as device name /dev/xvdh at /opt/zookeeper/transaction-logs\"\n/usr/local/bin/mount-ebs-volume \\\n --aws-region \"eu-west-2\" \\\n --volume-with-same-tag \"ebs-volume-0\" \\\n --device-name \"/dev/xvdh\" \\\n --mount-point \"/opt/zookeeper/transaction-logs\" \\\n --owner \"zookeeper\"\n\n# Run Exhibitor and ZooKeeper\necho \"Starting Exhibitor\"\n/opt/exhibitor/run-exhibitor \\\n --shared-config-s3-bucket \"kafka-zk-standalone-jujhqb\" \\\n --shared-config-s3-key \"zoo.cfg\" \\\n --shared-config-s3-region \"eu-west-2\" \\\n --zookeeper-client-port \"2181\" \\\n --zookeeper-connect-port \"2888\" \\\n --zookeeper-election-port \"3888\" \\\n --zookeeper-transaction-log-dir \"/opt/zookeeper/transaction-logs\" \\\n --zookeeper-memory \"1024m\" \\\n --exhibitor-port \"8080\" \\\n --exhibitor-user \"zookeeper\"\n", RequiresNew:false, Sensitive:false, Type:0x0}, "enable_monitoring":*terraform.ResourceAttrDiff{Old:"false", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "root_block_device.0.volume_type":*terraform.ResourceAttrDiff{Old:"standard", New:"standard", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "security_groups.4077807824":*terraform.ResourceAttrDiff{Old:"sg-6259b809", New:"sg-6259b809", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "root_block_device.0.volume_size":*terraform.ResourceAttrDiff{Old:"50", New:"50", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "image_id":*terraform.ResourceAttrDiff{Old:"ami-27f31240", New:"", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, 
Sensitive:false, Type:0x0}, "security_groups.#":*terraform.ResourceAttrDiff{Old:"1", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "root_block_device.#":*terraform.ResourceAttrDiff{Old:"1", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "instance_type":*terraform.ResourceAttrDiff{Old:"t2.small", New:"t2.small", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "root_block_device.0.delete_on_termination":*terraform.ResourceAttrDiff{Old:"true", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "placement_tenancy":*terraform.ResourceAttrDiff{Old:"default", New:"default", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "name_prefix":*terraform.ResourceAttrDiff{Old:"zk-JUJHQb-", New:"zk-JUJHQb-", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "ebs_block_device.#":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "key_name":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "iam_instance_profile":*terraform.ResourceAttrDiff{Old:"server-group-20180330183110821900000006", New:"server-group-20180330183110821900000006", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "name":*terraform.ResourceAttrDiff{Old:"zk-JUJHQb-20180330183122156200000008", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, 
"ebs_optimized":*terraform.ResourceAttrDiff{Old:"false", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "associate_public_ip_address":*terraform.ResourceAttrDiff{Old:"true", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)} | |
kafka-standalone-ubuntu 2018/03/30 11:35:29 Diff Two (usually from apply): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"instance_type":*terraform.ResourceAttrDiff{Old:"", New:"t2.small", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "key_name":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "name":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "root_block_device.#":*terraform.ResourceAttrDiff{Old:"", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "root_block_device.0.volume_type":*terraform.ResourceAttrDiff{Old:"", New:"standard", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "enable_monitoring":*terraform.ResourceAttrDiff{Old:"", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "root_block_device.0.iops":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "ebs_block_device.#":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "placement_tenancy":*terraform.ResourceAttrDiff{Old:"", New:"default", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "root_block_device.0.delete_on_termination":*terraform.ResourceAttrDiff{Old:"", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, 
"user_data":*terraform.ResourceAttrDiff{Old:"", New:"4306464cf78b10fbe8b97883b2ce5a19620d264f", NewComputed:false, NewRemoved:false, NewExtra:"#!/bin/bash\n# This script is meant to be run in the User Data of each ZooKeeper EC2 Instance while it's booting. The script uses the\n# run-exhibitor script to configure and start Exhibitor and ZooKeeper. Note that this script assumes it's running in\n# an AMI built from the Packer template in examples/zookeeper-ami/zookeeper.json.\n#\n# Note that many of the variables below are filled in via Terraform interpolation.\n\nset -e\n\n# Send the log output from this script to user-data.log, syslog, and the console\n# From: https://alestic.com/2010/12/ec2-user-data-output/\nexec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1\n\n# We created one ENI per server, so mount the ENI that has the same eni-0 tag as this server\necho \"Attaching ENI\"\n/usr/local/bin/attach-eni --eni-with-same-tag \"eni-0\"\n\n# Mount the EBS volume used for ZooKeeper transaction logs. Every write to ZooKeeper goes to the transaction log\n# and you get much better performance if you store this transaction log on a completely separate disk that does not\n# have to contend with any other I/O operations. 
The dataLogDir setting in ZooKeeper should be pointed at this volume.\n# https://zookeeper.apache.org/doc/r3.3.2/zookeeperAdmin.html#sc_advancedConfiguration\necho \"Mounting EBS volume as device name /dev/xvdh at /opt/zookeeper/transaction-logs\"\n/usr/local/bin/mount-ebs-volume \\\n --aws-region \"eu-west-2\" \\\n --volume-with-same-tag \"ebs-volume-0\" \\\n --device-name \"/dev/xvdh\" \\\n --mount-point \"/opt/zookeeper/transaction-logs\" \\\n --owner \"zookeeper\"\n\n# Run Exhibitor and ZooKeeper\necho \"Starting Exhibitor\"\n/opt/exhibitor/run-exhibitor \\\n --shared-config-s3-bucket \"kafka-zk-standalone-jujhqb\" \\\n --shared-config-s3-key \"zoo.cfg\" \\\n --shared-config-s3-region \"eu-west-2\" \\\n --zookeeper-client-port \"2181\" \\\n --zookeeper-connect-port \"2888\" \\\n --zookeeper-election-port \"3888\" \\\n --zookeeper-transaction-log-dir \"/opt/zookeeper/transaction-logs\" \\\n --zookeeper-memory \"1024m\" \\\n --exhibitor-port \"8080\" \\\n --exhibitor-user \"zookeeper\"\n", RequiresNew:true, Sensitive:false, Type:0x0}, "security_groups.#":*terraform.ResourceAttrDiff{Old:"", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "iam_instance_profile":*terraform.ResourceAttrDiff{Old:"", New:"server-group-20180330183110821900000006", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "name_prefix":*terraform.ResourceAttrDiff{Old:"", New:"zk-JUJHQb-", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "associate_public_ip_address":*terraform.ResourceAttrDiff{Old:"", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "ebs_optimized":*terraform.ResourceAttrDiff{Old:"", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, 
"root_block_device.0.volume_size":*terraform.ResourceAttrDiff{Old:"", New:"50", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "security_groups.4077807824":*terraform.ResourceAttrDiff{Old:"", New:"sg-6259b809", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)} | |
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 Also include as much context as you can about your config, state, and the steps you performed to trigger this error.
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 * module.kafka_brokers.module.kafka_brokers.aws_launch_configuration.server_group: aws_launch_configuration.server_group: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 Please include the following information in your report:
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 Terraform Version: 0.11.5
kafka-standalone-ubuntu 2018/03/30 11:35:29 Resource ID: aws_launch_configuration.server_group
kafka-standalone-ubuntu 2018/03/30 11:35:29 Mismatch reason: attribute mismatch: image_id
kafka-standalone-ubuntu 2018/03/30 11:35:29 Diff One (usually from plan): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"key_name":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "ebs_block_device.#":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "ebs_optimized":*terraform.ResourceAttrDiff{Old:"false", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "placement_tenancy":*terraform.ResourceAttrDiff{Old:"default", New:"default", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "iam_instance_profile":*terraform.ResourceAttrDiff{Old:"server-group-20180330183110398600000005", New:"server-group-20180330183110398600000005", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "enable_monitoring":*terraform.ResourceAttrDiff{Old:"false", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "root_block_device.0.volume_type":*terraform.ResourceAttrDiff{Old:"gp2", New:"gp2", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "name_prefix":*terraform.ResourceAttrDiff{Old:"kafka-JUJHQb-", New:"kafka-JUJHQb-", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "security_groups.1802437165":*terraform.ResourceAttrDiff{Old:"sg-ff57b694", New:"sg-ff57b694", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "instance_type":*terraform.ResourceAttrDiff{Old:"t2.small", 
New:"t2.small", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "security_groups.#":*terraform.ResourceAttrDiff{Old:"1", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "associate_public_ip_address":*terraform.ResourceAttrDiff{Old:"true", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "root_block_device.0.iops":*terraform.ResourceAttrDiff{Old:"0", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "root_block_device.0.volume_size":*terraform.ResourceAttrDiff{Old:"50", New:"50", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "name":*terraform.ResourceAttrDiff{Old:"kafka-JUJHQb-20180330183120089600000007", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "root_block_device.0.delete_on_termination":*terraform.ResourceAttrDiff{Old:"true", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "image_id":*terraform.ResourceAttrDiff{Old:"ami-a6f514c1", New:"", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "root_block_device.#":*terraform.ResourceAttrDiff{Old:"1", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "user_data":*terraform.ResourceAttrDiff{Old:"866b27b45ef85912bc132315590e205fa0176fb1", New:"c85762c10b844efaf90edf7c042fb1db1883b318", NewComputed:false, NewRemoved:false, NewExtra:"#!/bin/bash\n# This script is meant to be run in the User Data of each Kafka broker EC2 Instance while it's booting. 
The script uses\n# the run-kafka script to configure and start Kafka. Note that this script assumes it's running in an AMI built from\n# the Packer template in examples/kafka-ami/kafka.json.\n#\n# Note that many of the variables below are filled in via Terraform interpolation.\n\nset -e\n\n# Send the log output from this script to user-data.log, syslog, and the console\n# From: https://alestic.com/2010/12/ec2-user-data-output/\nexec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1\n\n# Mount the EBS volume used for Kafka logs. Every write to Kafka is written to a log file and you get\n# better performance if you store these log files on a completely separate disk that does not have to\n# contend with any other I/O operations. The log.dirs setting in Kafka should be pointed at this volume.\n# http://docs.confluent.io/current/kafka/deployment.html#disks\necho \"Mounting EBS volume as device name /dev/xvdh at /opt/kafka/kafka-logs\"\n/usr/local/bin/mount-ebs-volume \\\n --aws-region \"eu-west-2\" \\\n --volume-with-same-tag \"ebs-volume-0\" \\\n --device-name \"/dev/xvdh\" \\\n --mount-point \"/opt/kafka/kafka-logs\" \\\n --owner \"kafka\"\n\n# Run Kafka. A couple notes on the command below:\n#\n# - We add a /data to --log-dirs because Kafka doesn't like the lost+found folder in its logs folder.\n#\n# - We use the public-ip to make it easy to test this example. In real-world usage, you should use private-ip.\n#\n# - We enable SSL to show an example of how to encrypt data in transit with Kafka. If you want encryption for data at\n# rest too, you should also enable encryption for Kafka's EBS Volume in main.tf.\n#\n# - To keep this example simple, the key_store_password, trust_store_password, and ssl_cert_password are all passed in\n# as plain text. 
In real-world usage, they should all be encrypted with KMS and you should decrypt them in this User\n# Data script just before running Kafka.\necho \"Starting Kafka\"\n/opt/kafka/bin/run-kafka \\\n --config-path \"/opt/kafka/config/config/dev.server-4.0.x.properties\" \\\n --log4j-config-path \"/opt/kafka/config/log4j/dev.log4j.properties\" \\\n --zookeeper-eni-tag ServerGroupName=\"zk-JUJHQb\" \\\n --log-dirs \"/opt/kafka/kafka-logs/data\" \\\n --kafka-user \"kafka\" \\\n --memory \"1024m\" \\\n --advertised-external-kafka-port \"9092\" \\\n --advertised-internal-kafka-port \"9093\" \\\n --enable-ssl \"true\" \\\n --key-store-path \"/opt/kafka/ssl/kafka/keystore/dev.keystore.jks\" \\\n --key-store-password \"password\" \\\n --trust-store-path \"/opt/kafka/ssl/kafka/truststore/dev.truststore.jks\" \\\n --trust-store-password \"password\" \\\n --log-retention-hours \"168\" \\\n --num-partitions \"1\" \\\n --replication-factor \"1\" \\\n --offsets-replication-factor \"1\" \\\n --transaction-state-replication-factor \"1\" \\\n --min-in-sync-replicas \"1\" \\\n --unclean-leader-election \"true\"\n\n# Run a sophisticated health checker for Kafka (https://github.com/andreas-schroeder/kafka-health-check). Previously,\n# we simply checked a TCP port listener for the health check, but this led to Kafka reporting itself healthy prior to\n# the cluster actually being healthy, which led to confusing behavior during bootup.\necho \"Starting kafka-health-check\"\n/opt/kafka-health-check/bin/run-kafka-health-check \\\n --zookeeper-eni-tag ServerGroupName=\"zk-JUJHQb\" \\\n --broker-port 9094\n\n# This is only used for automated testing to have an easy way to force a rolling deployment. Don't copy it into\n# your real-world apps.\necho \"redeploy\"", RequiresNew:true, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)} | |
kafka-standalone-ubuntu 2018/03/30 11:35:29 Diff Two (usually from apply): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"root_block_device.0.volume_type":*terraform.ResourceAttrDiff{Old:"", New:"gp2", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "placement_tenancy":*terraform.ResourceAttrDiff{Old:"", New:"default", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "iam_instance_profile":*terraform.ResourceAttrDiff{Old:"", New:"server-group-20180330183110398600000005", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "name":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "key_name":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "root_block_device.0.iops":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "enable_monitoring":*terraform.ResourceAttrDiff{Old:"", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "root_block_device.0.delete_on_termination":*terraform.ResourceAttrDiff{Old:"", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "associate_public_ip_address":*terraform.ResourceAttrDiff{Old:"", New:"true", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "root_block_device.0.volume_size":*terraform.ResourceAttrDiff{Old:"", New:"50", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), 
RequiresNew:true, Sensitive:false, Type:0x0}, "user_data":*terraform.ResourceAttrDiff{Old:"", New:"c85762c10b844efaf90edf7c042fb1db1883b318", NewComputed:false, NewRemoved:false, NewExtra:"#!/bin/bash\n# This script is meant to be run in the User Data of each Kafka broker EC2 Instance while it's booting. The script uses\n# the run-kafka script to configure and start Kafka. Note that this script assumes it's running in an AMI built from\n# the Packer template in examples/kafka-ami/kafka.json.\n#\n# Note that many of the variables below are filled in via Terraform interpolation.\n\nset -e\n\n# Send the log output from this script to user-data.log, syslog, and the console\n# From: https://alestic.com/2010/12/ec2-user-data-output/\nexec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1\n\n# Mount the EBS volume used for Kafka logs. Every write to Kafka is written to a log file and you get\n# better performance if you store these log files on a completely separate disk that does not have to\n# contend with any other I/O operations. The log.dirs setting in Kafka should be pointed at this volume.\n# http://docs.confluent.io/current/kafka/deployment.html#disks\necho \"Mounting EBS volume as device name /dev/xvdh at /opt/kafka/kafka-logs\"\n/usr/local/bin/mount-ebs-volume \\\n --aws-region \"eu-west-2\" \\\n --volume-with-same-tag \"ebs-volume-0\" \\\n --device-name \"/dev/xvdh\" \\\n --mount-point \"/opt/kafka/kafka-logs\" \\\n --owner \"kafka\"\n\n# Run Kafka. A couple notes on the command below:\n#\n# - We add a /data to --log-dirs because Kafka doesn't like the lost+found folder in its logs folder.\n#\n# - We use the public-ip to make it easy to test this example. In real-world usage, you should use private-ip.\n#\n# - We enable SSL to show an example of how to encrypt data in transit with Kafka. 
If you want encryption for data at\n# rest too, you should also enable encryption for Kafka's EBS Volume in main.tf.\n#\n# - To keep this example simple, the key_store_password, trust_store_password, and ssl_cert_password are all passed in\n# as plain text. In real-world usage, they should all be encrypted with KMS and you should decrypt them in this User\n# Data script just before running Kafka.\necho \"Starting Kafka\"\n/opt/kafka/bin/run-kafka \\\n --config-path \"/opt/kafka/config/config/dev.server-4.0.x.properties\" \\\n --log4j-config-path \"/opt/kafka/config/log4j/dev.log4j.properties\" \\\n --zookeeper-eni-tag ServerGroupName=\"zk-JUJHQb\" \\\n --log-dirs \"/opt/kafka/kafka-logs/data\" \\\n --kafka-user \"kafka\" \\\n --memory \"1024m\" \\\n --advertised-external-kafka-port \"9092\" \\\n --advertised-internal-kafka-port \"9093\" \\\n --enable-ssl \"true\" \\\n --key-store-path \"/opt/kafka/ssl/kafka/keystore/dev.keystore.jks\" \\\n --key-store-password \"password\" \\\n --trust-store-path \"/opt/kafka/ssl/kafka/truststore/dev.truststore.jks\" \\\n --trust-store-password \"password\" \\\n --log-retention-hours \"168\" \\\n --num-partitions \"1\" \\\n --replication-factor \"1\" \\\n --offsets-replication-factor \"1\" \\\n --transaction-state-replication-factor \"1\" \\\n --min-in-sync-replicas \"1\" \\\n --unclean-leader-election \"true\"\n\n# Run a sophisticated health checker for Kafka (https://github.com/andreas-schroeder/kafka-health-check). Previously,\n# we simply checked a TCP port listener for the health check, but this led to Kafka reporting itself healthy prior to\n# the cluster actually being healthy, which led to confusing behavior during bootup.\necho \"Starting kafka-health-check\"\n/opt/kafka-health-check/bin/run-kafka-health-check \\\n --zookeeper-eni-tag ServerGroupName=\"zk-JUJHQb\" \\\n --broker-port 9094\n\n# This is only used for automated testing to have an easy way to force a rolling deployment. 
Don't copy it into\n# your real-world apps.\necho \"redeploy\"", RequiresNew:true, Sensitive:false, Type:0x0}, "security_groups.#":*terraform.ResourceAttrDiff{Old:"", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "instance_type":*terraform.ResourceAttrDiff{Old:"", New:"t2.small", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "security_groups.1802437165":*terraform.ResourceAttrDiff{Old:"", New:"sg-ff57b694", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "ebs_optimized":*terraform.ResourceAttrDiff{Old:"", New:"false", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}, "root_block_device.#":*terraform.ResourceAttrDiff{Old:"", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "ebs_block_device.#":*terraform.ResourceAttrDiff{Old:"", New:"", NewComputed:true, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "name_prefix":*terraform.ResourceAttrDiff{Old:"", New:"kafka-JUJHQb-", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:true, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)} | |
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 Also include as much context as you can about your config, state, and the steps you performed to trigger this error.
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 Terraform does not automatically rollback in the face of errors.
kafka-standalone-ubuntu 2018/03/30 11:35:29 Instead, your Terraform state file has been partially updated with
kafka-standalone-ubuntu 2018/03/30 11:35:29 any resources that successfully completed. Please address the error
kafka-standalone-ubuntu 2018/03/30 11:35:29 above and apply again to incrementally change your infrastructure.
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29
kafka-standalone-ubuntu 2018/03/30 11:35:29 Terraform apply failed with error: exit status 1
kafka-standalone-ubuntu 2018/03/30 11:35:29 Terraform apply failed with the error 'diffs didn't match during apply'. This usually indicates a minor Terraform timing bug (https://github.com/hashicorp/terraform/issues/5200) that goes away when you reapply. Retrying terraform apply.
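The last log line says the harness simply retries `terraform apply` when it sees this known transient error. A minimal sketch of that retry logic (a hypothetical wrapper, not the actual tooling behind this log) might look like:

```shell
#!/usr/bin/env bash
# Hypothetical retry wrapper for the transient "diffs didn't match during apply"
# bug (hashicorp/terraform#5200). A sketch only; not the script used in this log.
set -u

MAX_ATTEMPTS=3

apply_with_retries() {
  local attempt=1 output
  while [ "$attempt" -le "$MAX_ATTEMPTS" ]; do
    # Capture stdout and stderr together so the error message can be inspected.
    if output=$("$@" 2>&1); then
      echo "$output"
      return 0
    fi
    if echo "$output" | grep -q "diffs didn't match during apply"; then
      echo "Transient Terraform bug detected; retrying (attempt $attempt/$MAX_ATTEMPTS)..." >&2
      attempt=$((attempt + 1))
    else
      # Any other failure is real; surface it immediately.
      echo "$output" >&2
      return 1
    fi
  done
  return 1
}

# Example usage: apply_with_retries terraform apply -auto-approve
```

Because Terraform leaves the state partially updated on failure (as the log notes), re-running `apply` is incremental, which is what makes this blind-retry approach workable for this particular bug.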