15/10/26 19:19:38 INFO client.RMProxy: Connecting to ResourceManager at ip-10-65-200-150.ec2.internal/10.65.200.150:8032
Container: container_1444274555723_0062_02_000003 on ip-10-169-170-124.ec2.internal_8041
==========================================================================================
LogType:stderr
Log Upload Time:26-Oct-2015 19:18:20
LogLength:9332
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings. | |
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/filecache/114/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. | |
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] | |
15/10/26 19:17:40 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT] | |
15/10/26 19:17:41 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:17:41 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:17:41 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:17:42 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/26 19:17:42 INFO Remoting: Starting remoting | |
15/10/26 19:17:42 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:42150] | |
15/10/26 19:17:42 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 42150. | |
15/10/26 19:17:43 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:17:43 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:17:43 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:17:43 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:17:43 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:17:43 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/26 19:17:43 INFO Remoting: Starting remoting | |
15/10/26 19:17:43 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/26 19:17:43 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:58140] | |
15/10/26 19:17:43 INFO util.Utils: Successfully started service 'sparkExecutor' on port 58140. | |
15/10/26 19:17:43 INFO storage.DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-2c253803-2465-438a-9492-9aaeefda4990 | |
15/10/26 19:17:43 INFO storage.DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-09b11d18-5a00-4d7b-81e3-8d5114c39ca0 | |
15/10/26 19:17:43 INFO storage.MemoryStore: MemoryStore started with capacity 535.0 MB | |
15/10/26 19:17:43 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://[email protected]:48914/user/CoarseGrainedScheduler | |
15/10/26 19:17:43 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver | |
15/10/26 19:17:43 INFO executor.Executor: Starting executor ID 2 on host ip-10-169-170-124.ec2.internal | |
15/10/26 19:17:44 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 50135. | |
15/10/26 19:17:44 INFO netty.NettyBlockTransferService: Server created on 50135 | |
15/10/26 19:17:44 INFO storage.BlockManagerMaster: Trying to register BlockManager | |
15/10/26 19:17:44 INFO storage.BlockManagerMaster: Registered BlockManager | |
15/10/26 19:17:44 INFO storage.BlockManager: Registering executor with local external shuffle service. | |
15/10/26 19:18:08 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 0 | |
15/10/26 19:18:08 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0) | |
15/10/26 19:18:08 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 0 | |
15/10/26 19:18:08 INFO storage.MemoryStore: ensureFreeSpace(47141) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:08 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 46.0 KB, free 535.0 MB) | |
15/10/26 19:18:08 INFO broadcast.TorrentBroadcast: Reading broadcast variable 0 took 232 ms | |
15/10/26 19:18:08 INFO storage.MemoryStore: ensureFreeSpace(135712) called with curMem=47141, maxMem=560993402 | |
15/10/26 19:18:08 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 132.5 KB, free 534.8 MB) | |
15/10/26 19:18:08 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id | |
15/10/26 19:18:08 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id | |
15/10/26 19:18:09 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id | |
15/10/26 19:18:09 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap | |
15/10/26 19:18:09 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition | |
15/10/26 19:18:10 INFO metrics.MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440 | |
15/10/26 19:18:10 INFO metrics.MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-031cded7:CoarseGrainedExecutorBackend:07624 period:60 /mnt/var/em/raw/i-031cded7_20151026_CoarseGrainedExecutorBackend_07624_raw.bin | |
15/10/26 19:18:10 INFO output.FileOutputCommitter: Saved output of task 'attempt_201510261918_0000_m_000000_0' to hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.original/_temporary/0/task_201510261918_0000_m_000000 | |
15/10/26 19:18:10 INFO mapred.SparkHadoopMapRedUtil: attempt_201510261918_0000_m_000000_0: Committed | |
15/10/26 19:18:10 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 1885 bytes result sent to driver | |
15/10/26 19:18:10 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 3 | |
15/10/26 19:18:10 INFO executor.Executor: Running task 1.0 in stage 1.0 (TID 3) | |
15/10/26 19:18:10 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 1 | |
15/10/26 19:18:10 INFO storage.MemoryStore: ensureFreeSpace(2070) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:10 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.0 KB, free 535.0 MB) | |
15/10/26 19:18:10 INFO broadcast.TorrentBroadcast: Reading broadcast variable 1 took 14 ms | |
15/10/26 19:18:10 INFO storage.MemoryStore: ensureFreeSpace(3776) called with curMem=2070, maxMem=560993402 | |
15/10/26 19:18:10 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.7 KB, free 535.0 MB) | |
15/10/26 19:18:12 INFO executor.Executor: Finished task 1.0 in stage 1.0 (TID 3). 6395 bytes result sent to driver | |
15/10/26 19:18:14 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 4 | |
15/10/26 19:18:14 INFO executor.Executor: Running task 0.0 in stage 2.0 (TID 4) | |
15/10/26 19:18:14 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 3 | |
15/10/26 19:18:14 INFO storage.MemoryStore: ensureFreeSpace(29341) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:14 INFO storage.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 28.7 KB, free 535.0 MB) | |
15/10/26 19:18:14 INFO broadcast.TorrentBroadcast: Reading broadcast variable 3 took 11 ms | |
15/10/26 19:18:14 INFO storage.MemoryStore: ensureFreeSpace(82904) called with curMem=29341, maxMem=560993402 | |
15/10/26 19:18:14 INFO storage.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 81.0 KB, free 534.9 MB) | |
15/10/26 19:18:14 INFO datasources.DefaultWriterContainer: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter | |
15/10/26 19:18:14 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library | |
15/10/26 19:18:14 INFO compress.CodecPool: Got brand-new compressor [.gz] | |
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". | |
SLF4J: Defaulting to no-operation (NOP) logger implementation | |
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. | |
15/10/26 19:18:15 INFO output.FileOutputCommitter: Saved output of task 'attempt_201510261918_0002_m_000000_0' to hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.parquet/_temporary/0/task_201510261918_0002_m_000000 | |
15/10/26 19:18:15 INFO mapred.SparkHadoopMapRedUtil: attempt_201510261918_0002_m_000000_0: Committed | |
15/10/26 19:18:15 INFO executor.Executor: Finished task 0.0 in stage 2.0 (TID 4). 936 bytes result sent to driver | |
15/10/26 19:18:18 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown | |
15/10/26 19:18:18 INFO storage.MemoryStore: MemoryStore cleared | |
15/10/26 19:18:18 INFO storage.BlockManager: BlockManager stopped | |
15/10/26 19:18:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:18:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:18:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/26 19:18:18 INFO util.ShutdownHookManager: Shutdown hook called | |
LogType:stdout
Log Upload Time:26-Oct-2015 19:18:20
LogLength:4008
Log Contents:
2015-10-26T19:17:42.217+0000: [GC2015-10-26T19:17:42.217+0000: [ParNew: 272640K->17715K(306688K), 0.0275140 secs] 272640K->17715K(1014528K), 0.0276360 secs] [Times: user=0.05 sys=0.01, real=0.02 secs] | |
2015-10-26T19:17:42.245+0000: [GC [1 CMS-initial-mark: 0K(707840K)] 31456K(1014528K), 0.0055440 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] | |
2015-10-26T19:17:42.284+0000: [CMS-concurrent-mark: 0.031/0.033 secs] [Times: user=0.07 sys=0.01, real=0.03 secs] | |
2015-10-26T19:17:42.286+0000: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] | |
2015-10-26T19:17:43.566+0000: [CMS-concurrent-abortable-preclean: 1.026/1.280 secs] [Times: user=2.57 sys=0.48, real=1.28 secs] | |
2015-10-26T19:17:43.572+0000: [GC[YG occupancy: 161143 K (306688 K)]2015-10-26T19:17:43.572+0000: [Rescan (parallel) , 0.0226460 secs]2015-10-26T19:17:43.594+0000: [weak refs processing, 0.0000390 secs]2015-10-26T19:17:43.594+0000: [class unloading, 0.0026140 secs]2015-10-26T19:17:43.597+0000: [scrub symbol table, 0.0039820 secs]2015-10-26T19:17:43.601+0000: [scrub string table, 0.0003730 secs] [1 CMS-remark: 0K(707840K)] 161143K(1014528K), 0.0300440 secs] [Times: user=0.11 sys=0.00, real=0.03 secs] | |
2015-10-26T19:17:43.608+0000: [CMS-concurrent-sweep: 0.006/0.006 secs] [Times: user=0.02 sys=0.00, real=0.00 secs] | |
2015-10-26T19:17:43.640+0000: [CMS-concurrent-reset: 0.032/0.032 secs] [Times: user=0.09 sys=0.03, real=0.04 secs] | |
2015-10-26T19:18:08.694+0000: [GC2015-10-26T19:18:08.694+0000: [ParNew: 278914K->30457K(306688K), 0.0674610 secs] 278914K->36346K(1014528K), 0.0675320 secs] [Times: user=0.14 sys=0.06, real=0.07 secs] | |
2015-10-26T19:18:09.638+0000: [GC [1 CMS-initial-mark: 5888K(707840K)] 123263K(1014528K), 0.0198840 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] | |
2015-10-26T19:18:09.706+0000: [CMS-concurrent-mark: 0.043/0.048 secs] [Times: user=0.09 sys=0.00, real=0.04 secs] | |
2015-10-26T19:18:09.732+0000: [CMS-concurrent-preclean: 0.020/0.026 secs] [Times: user=0.04 sys=0.01, real=0.03 secs] | |
2015-10-26T19:18:14.094+0000: [GC2015-10-26T19:18:14.094+0000: [ParNew: 303097K->34047K(306688K), 0.0487820 secs] 308986K->63486K(1014528K), 0.0488510 secs] [Times: user=0.11 sys=0.04, real=0.05 secs] | |
CMS: abort preclean due to time 2015-10-26T19:18:14.758+0000: [CMS-concurrent-abortable-preclean: 2.877/5.026 secs] [Times: user=6.57 sys=0.93, real=5.03 secs] | |
2015-10-26T19:18:14.758+0000: [GC[YG occupancy: 71363 K (306688 K)]2015-10-26T19:18:14.758+0000: [Rescan (parallel) , 0.0055720 secs]2015-10-26T19:18:14.764+0000: [weak refs processing, 0.0000620 secs]2015-10-26T19:18:14.764+0000: [class unloading, 0.0094460 secs]2015-10-26T19:18:14.774+0000: [scrub symbol table, 0.0063210 secs]2015-10-26T19:18:14.780+0000: [scrub string table, 0.0004820 secs] [1 CMS-remark: 29438K(707840K)] 100802K(1014528K), 0.0223500 secs] [Times: user=0.03 sys=0.00, real=0.02 secs] | |
2015-10-26T19:18:14.796+0000: [CMS-concurrent-sweep: 0.014/0.015 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] | |
2015-10-26T19:18:14.799+0000: [CMS-concurrent-reset: 0.003/0.003 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: GZIP | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728 | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576 | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576 | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0 | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 65,568 | |
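The Parquet writer settings reported just above (GZIP compression, a 134217728-byte row-group size, 1048576-byte pages, dictionary encoding on, validation off) are the parquet-mr defaults plus an explicitly chosen codec. A minimal Spark 1.5 sketch of how such values could be set is below; it is illustrative only and not taken from the job's actual code, and the application name is made up.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("parquet-settings-sketch"))
    val sqlContext = new SQLContext(sc)
    // Codec reported above as "Compression: GZIP"
    sqlContext.setConf("spark.sql.parquet.compression.codec", "gzip")
    // Row-group and page sizes reported above (128 MB and 1 MB)
    sc.hadoopConfiguration.setInt("parquet.block.size", 134217728)
    sc.hadoopConfiguration.setInt("parquet.page.size", 1048576)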
Container: container_1444274555723_0062_01_000002 on ip-10-169-170-124.ec2.internal_8041
==========================================================================================
LogType:stderr
Log Upload Time:26-Oct-2015 19:18:20
LogLength:9333
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings. | |
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/filecache/114/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. | |
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] | |
15/10/26 19:16:56 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT] | |
15/10/26 19:16:57 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:16:57 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:16:57 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:16:57 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/26 19:16:57 INFO Remoting: Starting remoting | |
15/10/26 19:16:58 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:59654] | |
15/10/26 19:16:58 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 59654. | |
15/10/26 19:16:58 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:16:58 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:16:58 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:16:58 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:16:58 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:16:58 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/26 19:16:58 INFO Remoting: Starting remoting | |
15/10/26 19:16:58 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/26 19:16:58 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:49483] | |
15/10/26 19:16:58 INFO util.Utils: Successfully started service 'sparkExecutor' on port 49483. | |
15/10/26 19:16:59 INFO storage.DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-7949cd80-8059-42cb-8422-4f41af2afc42 | |
15/10/26 19:16:59 INFO storage.DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-0b68d77f-51e1-4b58-b226-fb16ea4ff202 | |
15/10/26 19:16:59 INFO storage.MemoryStore: MemoryStore started with capacity 535.0 MB | |
15/10/26 19:16:59 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://[email protected]:52900/user/CoarseGrainedScheduler | |
15/10/26 19:16:59 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver | |
15/10/26 19:16:59 INFO executor.Executor: Starting executor ID 1 on host ip-10-169-170-124.ec2.internal | |
15/10/26 19:16:59 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55017. | |
15/10/26 19:16:59 INFO netty.NettyBlockTransferService: Server created on 55017 | |
15/10/26 19:16:59 INFO storage.BlockManagerMaster: Trying to register BlockManager | |
15/10/26 19:16:59 INFO storage.BlockManagerMaster: Registered BlockManager | |
15/10/26 19:16:59 INFO storage.BlockManager: Registering executor with local external shuffle service. | |
15/10/26 19:17:20 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1 | |
15/10/26 19:17:20 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1) | |
15/10/26 19:17:21 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 0 | |
15/10/26 19:17:21 INFO storage.MemoryStore: ensureFreeSpace(47141) called with curMem=0, maxMem=560993402 | |
15/10/26 19:17:21 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 46.0 KB, free 535.0 MB) | |
15/10/26 19:17:21 INFO broadcast.TorrentBroadcast: Reading broadcast variable 0 took 255 ms | |
15/10/26 19:17:21 INFO storage.MemoryStore: ensureFreeSpace(135712) called with curMem=47141, maxMem=560993402 | |
15/10/26 19:17:21 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 132.5 KB, free 534.8 MB) | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition | |
15/10/26 19:17:22 INFO metrics.MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440 | |
15/10/26 19:17:22 INFO metrics.MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-031cded7:CoarseGrainedExecutorBackend:07399 period:60 /mnt/var/em/raw/i-031cded7_20151026_CoarseGrainedExecutorBackend_07399_raw.bin | |
15/10/26 19:17:22 INFO output.FileOutputCommitter: Saved output of task 'attempt_201510261917_0000_m_000001_1' to hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.original/_temporary/0/task_201510261917_0000_m_000001 | |
15/10/26 19:17:22 INFO mapred.SparkHadoopMapRedUtil: attempt_201510261917_0000_m_000001_1: Committed | |
15/10/26 19:17:22 INFO executor.Executor: Finished task 1.0 in stage 0.0 (TID 1). 1885 bytes result sent to driver | |
15/10/26 19:17:23 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 2 | |
15/10/26 19:17:23 INFO executor.Executor: Running task 0.0 in stage 1.0 (TID 2) | |
15/10/26 19:17:23 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 1 | |
15/10/26 19:17:23 INFO storage.MemoryStore: ensureFreeSpace(2070) called with curMem=0, maxMem=560993402 | |
15/10/26 19:17:23 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.0 KB, free 535.0 MB) | |
15/10/26 19:17:23 INFO broadcast.TorrentBroadcast: Reading broadcast variable 1 took 39 ms | |
15/10/26 19:17:23 INFO storage.MemoryStore: ensureFreeSpace(3776) called with curMem=2070, maxMem=560993402 | |
15/10/26 19:17:23 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.7 KB, free 535.0 MB) | |
15/10/26 19:17:23 INFO executor.Executor: Finished task 0.0 in stage 1.0 (TID 2). 1616 bytes result sent to driver | |
15/10/26 19:17:26 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 4 | |
15/10/26 19:17:26 INFO executor.Executor: Running task 0.0 in stage 2.0 (TID 4) | |
15/10/26 19:17:26 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 3 | |
15/10/26 19:17:26 INFO storage.MemoryStore: ensureFreeSpace(29338) called with curMem=0, maxMem=560993402 | |
15/10/26 19:17:26 INFO storage.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 28.7 KB, free 535.0 MB) | |
15/10/26 19:17:26 INFO broadcast.TorrentBroadcast: Reading broadcast variable 3 took 14 ms | |
15/10/26 19:17:26 INFO storage.MemoryStore: ensureFreeSpace(82904) called with curMem=29338, maxMem=560993402 | |
15/10/26 19:17:26 INFO storage.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 81.0 KB, free 534.9 MB) | |
15/10/26 19:17:27 INFO datasources.DefaultWriterContainer: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter | |
15/10/26 19:17:28 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library | |
15/10/26 19:17:28 INFO compress.CodecPool: Got brand-new compressor [.gz] | |
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". | |
SLF4J: Defaulting to no-operation (NOP) logger implementation | |
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. | |
15/10/26 19:17:28 INFO output.FileOutputCommitter: Saved output of task 'attempt_201510261917_0002_m_000000_0' to hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.parquet/_temporary/0/task_201510261917_0002_m_000000 | |
15/10/26 19:17:28 INFO mapred.SparkHadoopMapRedUtil: attempt_201510261917_0002_m_000000_0: Committed | |
15/10/26 19:17:28 INFO executor.Executor: Finished task 0.0 in stage 2.0 (TID 4). 936 bytes result sent to driver | |
15/10/26 19:17:30 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown | |
15/10/26 19:17:30 INFO storage.MemoryStore: MemoryStore cleared | |
15/10/26 19:17:30 INFO storage.BlockManager: BlockManager stopped | |
15/10/26 19:17:30 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:17:30 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:17:30 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/26 19:17:30 INFO util.ShutdownHookManager: Shutdown hook called | |
LogType:stdout
Log Upload Time:26-Oct-2015 19:18:20
LogLength:4008
Log Contents:
2015-10-26T19:16:58.029+0000: [GC2015-10-26T19:16:58.029+0000: [ParNew: 272640K->17621K(306688K), 0.0285660 secs] 272640K->17621K(1014528K), 0.0286860 secs] [Times: user=0.05 sys=0.02, real=0.03 secs] | |
2015-10-26T19:16:58.058+0000: [GC [1 CMS-initial-mark: 0K(707840K)] 17621K(1014528K), 0.0055390 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] | |
2015-10-26T19:16:58.093+0000: [CMS-concurrent-mark: 0.028/0.029 secs] [Times: user=0.05 sys=0.02, real=0.03 secs] | |
2015-10-26T19:16:58.094+0000: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] | |
2015-10-26T19:16:59.366+0000: [CMS-concurrent-abortable-preclean: 0.945/1.272 secs] [Times: user=2.57 sys=0.32, real=1.27 secs] | |
2015-10-26T19:16:59.367+0000: [GC[YG occupancy: 159364 K (306688 K)]2015-10-26T19:16:59.367+0000: [Rescan (parallel) , 0.0167680 secs]2015-10-26T19:16:59.384+0000: [weak refs processing, 0.0000430 secs]2015-10-26T19:16:59.384+0000: [class unloading, 0.0027630 secs]2015-10-26T19:16:59.387+0000: [scrub symbol table, 0.0043100 secs]2015-10-26T19:16:59.391+0000: [scrub string table, 0.0003740 secs] [1 CMS-remark: 0K(707840K)] 159364K(1014528K), 0.0246720 secs] [Times: user=0.06 sys=0.00, real=0.03 secs] | |
2015-10-26T19:16:59.399+0000: [CMS-concurrent-sweep: 0.006/0.006 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] | |
2015-10-26T19:16:59.431+0000: [CMS-concurrent-reset: 0.032/0.032 secs] [Times: user=0.07 sys=0.02, real=0.03 secs] | |
2015-10-26T19:17:21.092+0000: [GC2015-10-26T19:17:21.092+0000: [ParNew: 276011K->31754K(306688K), 0.0879770 secs] 276011K->37638K(1014528K), 0.0880540 secs] [Times: user=0.12 sys=0.05, real=0.09 secs] | |
2015-10-26T19:17:22.055+0000: [GC [1 CMS-initial-mark: 5884K(707840K)] 124905K(1014528K), 0.0215850 secs] [Times: user=0.03 sys=0.00, real=0.03 secs] | |
2015-10-26T19:17:22.139+0000: [CMS-concurrent-mark: 0.053/0.062 secs] [Times: user=0.09 sys=0.03, real=0.06 secs] | |
2015-10-26T19:17:22.167+0000: [CMS-concurrent-preclean: 0.021/0.028 secs] [Times: user=0.04 sys=0.01, real=0.02 secs] | |
CMS: abort preclean due to time 2015-10-26T19:17:27.196+0000: [CMS-concurrent-abortable-preclean: 2.240/5.029 secs] [Times: user=4.20 sys=0.61, real=5.03 secs] | |
2015-10-26T19:17:27.196+0000: [GC[YG occupancy: 273814 K (306688 K)]2015-10-26T19:17:27.196+0000: [Rescan (parallel) , 0.0559120 secs]2015-10-26T19:17:27.252+0000: [weak refs processing, 0.0000490 secs]2015-10-26T19:17:27.252+0000: [class unloading, 0.0065000 secs]2015-10-26T19:17:27.259+0000: [scrub symbol table, 0.0060420 secs]2015-10-26T19:17:27.265+0000: [scrub string table, 0.0004680 secs] [1 CMS-remark: 5884K(707840K)] 279698K(1014528K), 0.0693660 secs] [Times: user=0.23 sys=0.01, real=0.07 secs] | |
2015-10-26T19:17:27.278+0000: [CMS-concurrent-sweep: 0.012/0.012 secs] [Times: user=0.03 sys=0.00, real=0.02 secs] | |
2015-10-26T19:17:27.280+0000: [CMS-concurrent-reset: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] | |
2015-10-26T19:17:27.590+0000: [GC2015-10-26T19:17:27.590+0000: [ParNew: 304394K->34048K(306688K), 0.1708150 secs] 310246K->79820K(1014528K), 0.1709030 secs] [Times: user=0.56 sys=0.07, real=0.17 secs] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: GZIP | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728 | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576 | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576 | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0 | |
Oct 26, 2015 7:17:28 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 65,568 | |
Container: container_1444274555723_0062_01_000001 on ip-10-169-170-124.ec2.internal_8041
==========================================================================================
LogType:stderr
Log Upload Time:26-Oct-2015 19:18:20
LogLength:47318
Log Contents:
log4j:ERROR Could not read configuration file from URL [file:/etc/spark/conf/log4j.properties]. | |
java.io.FileNotFoundException: /etc/spark/conf/log4j.properties (No such file or directory) | |
at java.io.FileInputStream.open(Native Method) | |
at java.io.FileInputStream.<init>(FileInputStream.java:146) | |
at java.io.FileInputStream.<init>(FileInputStream.java:101) | |
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90) | |
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188) | |
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557) | |
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526) | |
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127) | |
at org.apache.spark.Logging$class.initializeLogging(Logging.scala:122) | |
at org.apache.spark.Logging$class.initializeIfNecessary(Logging.scala:107) | |
at org.apache.spark.Logging$class.log(Logging.scala:51) | |
at org.apache.spark.deploy.yarn.ApplicationMaster$.log(ApplicationMaster.scala:603) | |
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:617) | |
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala) | |
log4j:ERROR Ignoring configuration file [file:/etc/spark/conf/log4j.properties]. | |
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties | |
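The log4j ERROR above shows the ApplicationMaster falling back to Spark's built-in log4j defaults because /etc/spark/conf/log4j.properties does not exist on this node. A common workaround for a missing cluster-side file (an assumption about the fix, not something this log confirms was done) is to ship a log4j.properties with the job via spark-submit --files log4j.properties and point the JVMs at it with spark.driver.extraJavaOptions and spark.executor.extraJavaOptions set to -Dlog4j.configuration=log4j.properties. A minimal log4j.properties sketch, essentially mirroring Spark's defaults:

    # Minimal log4j.properties sketch (illustrative, not the cluster's actual file)
    log4j.rootCategory=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n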
SLF4J: Class path contains multiple SLF4J bindings. | |
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/filecache/114/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. | |
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] | |
15/10/26 19:16:49 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT] | |
15/10/26 19:16:50 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1444274555723_0062_000001 | |
15/10/26 19:16:50 INFO SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:16:50 INFO SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:16:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:16:51 INFO ApplicationMaster: Starting the user application in a separate Thread | |
15/10/26 19:16:51 INFO ApplicationMaster: Waiting for spark context initialization | |
15/10/26 19:16:51 INFO ApplicationMaster: Waiting for spark context initialization ... | |
15/10/26 19:16:51 INFO SparkContext: Running Spark version 1.5.0 | |
15/10/26 19:16:51 INFO SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:16:51 INFO SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:16:51 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:16:51 INFO Slf4jLogger: Slf4jLogger started | |
15/10/26 19:16:51 INFO Remoting: Starting remoting | |
15/10/26 19:16:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:52900] | |
15/10/26 19:16:52 INFO Utils: Successfully started service 'sparkDriver' on port 52900. | |
15/10/26 19:16:52 INFO SparkEnv: Registering MapOutputTracker | |
15/10/26 19:16:52 INFO SparkEnv: Registering BlockManagerMaster | |
15/10/26 19:16:52 INFO DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-18de0316-2938-4cca-89a4-b25f8504a10e | |
15/10/26 19:16:52 INFO DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-782f01aa-c104-4823-becf-c3a28863714b | |
15/10/26 19:16:52 INFO MemoryStore: MemoryStore started with capacity 535.0 MB | |
15/10/26 19:16:52 INFO HttpFileServer: HTTP File server directory is /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/spark-18572d02-52d9-4c07-ab61-7ad0e46380e5/httpd-c3471d39-7d56-4a13-8b72-7bcec639f1eb | |
15/10/26 19:16:52 INFO HttpServer: Starting HTTP Server | |
15/10/26 19:16:52 INFO Utils: Successfully started service 'HTTP file server' on port 38809. | |
15/10/26 19:16:52 INFO SparkEnv: Registering OutputCommitCoordinator | |
15/10/26 19:16:52 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter | |
15/10/26 19:16:52 INFO Utils: Successfully started service 'SparkUI' on port 59870. | |
15/10/26 19:16:52 INFO SparkUI: Started SparkUI at http://10.169.170.124:59870 | |
15/10/26 19:16:52 INFO YarnClusterScheduler: Created YarnClusterScheduler | |
15/10/26 19:16:52 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set. | |
15/10/26 19:16:52 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 56734. | |
15/10/26 19:16:52 INFO NettyBlockTransferService: Server created on 56734 | |
15/10/26 19:16:52 INFO BlockManagerMaster: Trying to register BlockManager | |
15/10/26 19:16:52 INFO BlockManagerMasterEndpoint: Registering block manager 10.169.170.124:56734 with 535.0 MB RAM, BlockManagerId(driver, 10.169.170.124, 56734) | |
15/10/26 19:16:52 INFO BlockManagerMaster: Registered BlockManager | |
15/10/26 19:16:53 INFO MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440 | |
15/10/26 19:16:53 INFO MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-031cded7:ApplicationMaster:07298 period:60 /mnt/var/em/raw/i-031cded7_20151026_ApplicationMaster_07298_raw.bin | |
15/10/26 19:16:53 INFO EventLoggingListener: Logging events to hdfs:///var/log/spark/apps/application_1444274555723_0062_1 | |
15/10/26 19:16:54 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/YarnAM#-697789848]) | |
15/10/26 19:16:54 INFO RMProxy: Connecting to ResourceManager at ip-10-65-200-150.ec2.internal/10.65.200.150:8030 | |
15/10/26 19:16:54 INFO YarnRMClient: Registering the ApplicationMaster | |
15/10/26 19:16:54 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead | |
15/10/26 19:16:54 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>) | |
15/10/26 19:16:54 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>) | |
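The 1408 MB per container requested above is the executor heap plus the YARN memory overhead. With the -Xms1024m/-Xmx1024m visible in the launch commands below and the Spark 1.5 default overhead of max(384 MB, 10% of executor memory), and assuming no explicit spark.yarn.executor.memoryOverhead was set, the arithmetic is:

    1408 MB = 1024 MB (spark.executor.memory, the -Xmx1024m in the launch command)
            + 384 MB (spark.yarn.executor.memoryOverhead, default max(384 MB, 0.10 x 1024 MB) = 384 MB)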
15/10/26 19:16:54 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals | |
15/10/26 19:16:54 INFO AMRMClientImpl: Received new token for : ip-10-169-170-124.ec2.internal:8041 | |
15/10/26 19:16:54 INFO AMRMClientImpl: Received new token for : ip-10-67-169-247.ec2.internal:8041 | |
15/10/26 19:16:54 INFO YarnAllocator: Launching container container_1444274555723_0062_01_000002 for on host ip-10-169-170-124.ec2.internal | |
15/10/26 19:16:54 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52900/user/CoarseGrainedScheduler, executorHostname: ip-10-169-170-124.ec2.internal | |
15/10/26 19:16:54 INFO YarnAllocator: Launching container container_1444274555723_0062_01_000003 for on host ip-10-67-169-247.ec2.internal | |
15/10/26 19:16:54 INFO ExecutorRunnable: Starting Executor Container | |
15/10/26 19:16:54 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52900/user/CoarseGrainedScheduler, executorHostname: ip-10-67-169-247.ec2.internal | |
15/10/26 19:16:54 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them. | |
15/10/26 19:16:54 INFO ExecutorRunnable: Starting Executor Container | |
15/10/26 19:16:54 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 | |
15/10/26 19:16:54 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 | |
15/10/26 19:16:54 INFO ExecutorRunnable: Setting up ContainerLaunchContext | |
15/10/26 19:16:54 INFO ExecutorRunnable: Setting up ContainerLaunchContext | |
15/10/26 19:16:54 INFO ExecutorRunnable: Preparing Local resources | |
15/10/26 19:16:54 INFO ExecutorRunnable: Preparing Local resources | |
15/10/26 19:16:54 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0062/Prometheus-assembly-0.0.1.jar" } size: 162982714 timestamp: 1445887005973 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0062/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar" } size: 206949550 timestamp: 1445887004647 type: FILE visibility: PRIVATE) | |
15/10/26 19:16:54 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0062/Prometheus-assembly-0.0.1.jar" } size: 162982714 timestamp: 1445887005973 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0062/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar" } size: 206949550 timestamp: 1445887004647 type: FILE visibility: PRIVATE) | |
15/10/26 19:16:54 INFO ExecutorRunnable: | |
=============================================================================== | |
YARN executor launch context: | |
env: | |
CLASSPATH -> /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*<CPS>{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/* | |
SPARK_LOG_URL_STDERR -> http://ip-10-169-170-124.ec2.internal:8042/node/containerlogs/container_1444274555723_0062_01_000002/hadoop/stderr?start=-4096 | |
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1444274555723_0062 | |
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 206949550,162982714 | |
SPARK_USER -> hadoop | |
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE | |
SPARK_YARN_MODE -> true | |
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1445887004647,1445887005973 | |
SPARK_LOG_URL_STDOUT -> http://ip-10-169-170-124.ec2.internal:8042/node/containerlogs/container_1444274555723_0062_01_000002/hadoop/stdout?start=-4096 | |
SPARK_YARN_CACHE_FILES -> hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0062/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar#__spark__.jar,hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0062/Prometheus-assembly-0.0.1.jar#__app__.jar | |
command: | |
LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-verbose:gc' '-XX:+PrintGCDetails' '-XX:+PrintGCDateStamps' '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52900' '-Dspark.history.ui.port=18080' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52900/user/CoarseGrainedScheduler --executor-id 1 --hostname ip-10-169-170-124.ec2.internal --cores 1 --app-id application_1444274555723_0062 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr | |
=============================================================================== | |
15/10/26 19:16:54 INFO ExecutorRunnable: | |
=============================================================================== | |
YARN executor launch context: | |
env: | |
CLASSPATH -> /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*<CPS>{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/* | |
SPARK_LOG_URL_STDERR -> http://ip-10-67-169-247.ec2.internal:8042/node/containerlogs/container_1444274555723_0062_01_000003/hadoop/stderr?start=-4096 | |
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1444274555723_0062 | |
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 206949550,162982714 | |
SPARK_USER -> hadoop | |
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE | |
SPARK_YARN_MODE -> true | |
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1445887004647,1445887005973 | |
SPARK_LOG_URL_STDOUT -> http://ip-10-67-169-247.ec2.internal:8042/node/containerlogs/container_1444274555723_0062_01_000003/hadoop/stdout?start=-4096 | |
SPARK_YARN_CACHE_FILES -> hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0062/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar#__spark__.jar,hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0062/Prometheus-assembly-0.0.1.jar#__app__.jar | |
command: | |
LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-verbose:gc' '-XX:+PrintGCDetails' '-XX:+PrintGCDateStamps' '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52900' '-Dspark.history.ui.port=18080' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52900/user/CoarseGrainedScheduler --executor-id 2 --hostname ip-10-67-169-247.ec2.internal --cores 1 --app-id application_1444274555723_0062 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr | |
=============================================================================== | |
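The JVM options in the two launch contexts above (the 1024 MB heap, the CMS and GC-logging flags) map onto standard Spark configuration: spark.executor.memory supplies -Xms/-Xmx, and the quoted -verbose:gc/-XX:... flags come from spark.executor.extraJavaOptions, which on EMR normally arrive via the cluster's spark-defaults.conf. A sketch of the equivalent programmatic settings, for illustration only:

    import org.apache.spark.SparkConf

    // Illustrative equivalents of what the launch context shows; not the job's actual configuration
    val conf = new SparkConf()
      .set("spark.executor.memory", "1024m")
      .set("spark.executor.cores", "1")
      .set("spark.executor.extraJavaOptions",
        "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC " +
        "-XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled")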
15/10/26 19:16:54 INFO ContainerManagementProtocolProxy: Opening proxy : ip-10-67-169-247.ec2.internal:8041 | |
15/10/26 19:16:54 INFO ContainerManagementProtocolProxy: Opening proxy : ip-10-169-170-124.ec2.internal:8041 | |
15/10/26 19:16:58 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-169-170-124.ec2.internal:59654 | |
15/10/26 19:16:59 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:49483/user/Executor#-402690894]) with ID 1 | |
15/10/26 19:16:59 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-169-170-124.ec2.internal:55017 with 535.0 MB RAM, BlockManagerId(1, ip-10-169-170-124.ec2.internal, 55017) | |
15/10/26 19:17:00 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-67-169-247.ec2.internal:55028 | |
15/10/26 19:17:01 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:47330/user/Executor#492490616]) with ID 2 | |
15/10/26 19:17:01 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8 | |
15/10/26 19:17:01 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done | |
15/10/26 19:17:01 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-67-169-247.ec2.internal:42476 with 535.0 MB RAM, BlockManagerId(2, ip-10-67-169-247.ec2.internal, 42476) | |
15/10/26 19:17:02 INFO HiveContext: Initializing execution hive, version 1.2.1 | |
15/10/26 19:17:02 INFO ClientWrapper: Inspected Hadoop version: 2.6.0-amzn-1 | |
15/10/26 19:17:02 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-amzn-1 | |
15/10/26 19:17:02 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | |
15/10/26 19:17:02 INFO ObjectStore: ObjectStore, initialize called | |
15/10/26 19:17:02 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored | |
15/10/26 19:17:02 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored | |
15/10/26 19:17:05 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" | |
15/10/26 19:17:06 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:06 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:08 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:08 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:08 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
15/10/26 19:17:08 INFO ObjectStore: Initialized ObjectStore | |
15/10/26 19:17:09 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
15/10/26 19:17:09 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
15/10/26 19:17:09 INFO HiveMetaStore: Added admin role in metastore | |
15/10/26 19:17:09 INFO HiveMetaStore: Added public role in metastore | |
15/10/26 19:17:09 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
15/10/26 19:17:09 INFO HiveMetaStore: 0: get_all_databases | |
15/10/26 19:17:09 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_all_databases | |
15/10/26 19:17:09 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
15/10/26 19:17:09 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
15/10/26 19:17:09 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:09 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_01_000001/tmp/yarn | |
15/10/26 19:17:09 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_01_000001/tmp/b42bd7f7-2ace-41f2-b058-f710771d4577_resources | |
15/10/26 19:17:09 INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/b42bd7f7-2ace-41f2-b058-f710771d4577 | |
15/10/26 19:17:09 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_01_000001/tmp/yarn/b42bd7f7-2ace-41f2-b058-f710771d4577 | |
15/10/26 19:17:09 INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/b42bd7f7-2ace-41f2-b058-f710771d4577/_tmp_space.db | |
15/10/26 19:17:09 INFO HiveContext: default warehouse location is /user/hive/warehouse | |
15/10/26 19:17:09 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes. | |
15/10/26 19:17:09 INFO ClientWrapper: Inspected Hadoop version: 2.4.0 | |
15/10/26 19:17:09 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.4.0 | |
15/10/26 19:17:10 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:10 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:10 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:10 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:10 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:10 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable | |
15/10/26 19:17:10 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | |
15/10/26 19:17:10 INFO ObjectStore: ObjectStore, initialize called | |
15/10/26 19:17:11 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored | |
15/10/26 19:17:11 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored | |
15/10/26 19:17:12 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:12 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:12 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:12 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" | |
15/10/26 19:17:13 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:13 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:15 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:15 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:15 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
15/10/26 19:17:15 INFO ObjectStore: Initialized ObjectStore | |
15/10/26 19:17:15 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
15/10/26 19:17:15 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
15/10/26 19:17:15 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:16 INFO HiveMetaStore: Added admin role in metastore | |
15/10/26 19:17:16 INFO HiveMetaStore: Added public role in metastore | |
15/10/26 19:17:16 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
15/10/26 19:17:16 INFO HiveMetaStore: 0: get_all_databases | |
15/10/26 19:17:16 INFO audit: ugi=yarn ip=unknown-ip-addr cmd=get_all_databases | |
15/10/26 19:17:16 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
15/10/26 19:17:16 INFO audit: ugi=yarn ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
15/10/26 19:17:16 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:16 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:16 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_01_000001/tmp/bf40fce3-c7ec-4578-9621-ed24c1053de8_resources | |
15/10/26 19:17:16 INFO SessionState: Created HDFS directory: /tmp/hive/yarn/bf40fce3-c7ec-4578-9621-ed24c1053de8 | |
15/10/26 19:17:16 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_01_000001/tmp/yarn/bf40fce3-c7ec-4578-9621-ed24c1053de8 | |
15/10/26 19:17:16 INFO SessionState: Created HDFS directory: /tmp/hive/yarn/bf40fce3-c7ec-4578-9621-ed24c1053de8/_tmp_space.db | |
15/10/26 19:17:20 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id | |
15/10/26 19:17:20 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id | |
15/10/26 19:17:20 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap | |
15/10/26 19:17:20 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition | |
15/10/26 19:17:20 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id | |
15/10/26 19:17:20 INFO SparkContext: Starting job: saveAsTextFile at CLIJob.scala:96 | |
15/10/26 19:17:20 INFO DAGScheduler: Got job 0 (saveAsTextFile at CLIJob.scala:96) with 2 output partitions | |
15/10/26 19:17:20 INFO DAGScheduler: Final stage: ResultStage 0(saveAsTextFile at CLIJob.scala:96) | |
15/10/26 19:17:20 INFO DAGScheduler: Parents of final stage: List() | |
15/10/26 19:17:20 INFO DAGScheduler: Missing parents: List() | |
15/10/26 19:17:20 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at saveAsTextFile at CLIJob.scala:96), which has no missing parents | |
15/10/26 19:17:20 INFO MemoryStore: ensureFreeSpace(135712) called with curMem=0, maxMem=560993402 | |
15/10/26 19:17:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 132.5 KB, free 534.9 MB) | |
15/10/26 19:17:20 INFO MemoryStore: ensureFreeSpace(47141) called with curMem=135712, maxMem=560993402 | |
15/10/26 19:17:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 46.0 KB, free 534.8 MB) | |
15/10/26 19:17:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.169.170.124:56734 (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:17:20 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861 | |
15/10/26 19:17:20 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at saveAsTextFile at CLIJob.scala:96) | |
15/10/26 19:17:20 INFO YarnClusterScheduler: Adding task set 0.0 with 2 tasks | |
15/10/26 19:17:20 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, ip-10-67-169-247.ec2.internal, PROCESS_LOCAL, 2087 bytes) | |
15/10/26 19:17:20 WARN TaskSetManager: Stage 0 contains a task of very large size (188 KB). The maximum recommended task size is 100 KB. | |
15/10/26 19:17:20 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, ip-10-169-170-124.ec2.internal, PROCESS_LOCAL, 193533 bytes) | |
15/10/26 19:17:21 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-67-169-247.ec2.internal:42476 (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:17:21 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-169-170-124.ec2.internal:55017 (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:17:22 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1690 ms on ip-10-67-169-247.ec2.internal (1/2) | |
15/10/26 19:17:22 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1978 ms on ip-10-169-170-124.ec2.internal (2/2) | |
15/10/26 19:17:22 INFO DAGScheduler: ResultStage 0 (saveAsTextFile at CLIJob.scala:96) finished in 2.002 s | |
15/10/26 19:17:22 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool | |
15/10/26 19:17:22 INFO DAGScheduler: Job 0 finished: saveAsTextFile at CLIJob.scala:96, took 2.226595 s | |
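The "task of very large size (188 KB)" warning a few lines above means the serialized task payload (the task closure plus any partition data from locally parallelized collections) exceeds Spark's recommended 100 KB per task. One common mitigation, if a large read-only structure is being captured and shipped with every task, is to move it into a broadcast variable so it is sent to each executor only once. A minimal sketch follows, using Spark 1.5-style APIs; the lookup table and paths are illustrative and not taken from CLIJob.scala:

  import org.apache.spark.{SparkConf, SparkContext}

  object BroadcastSketch {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("broadcast-sketch"))

      // Hypothetical large read-only lookup table; without broadcast it would be
      // serialized into every task and could inflate the task size past 100 KB.
      val lookupTable: Map[String, String] = Map("ngc" -> "example network")
      val lookupBc = sc.broadcast(lookupTable)

      // Each task reads the broadcast value instead of carrying its own copy.
      sc.textFile("hdfs:///tmp/input.txt")
        .map(line => lookupBc.value.getOrElse(line, line))
        .saveAsTextFile("hdfs:///tmp/output")

      sc.stop()
    }
  }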
15/10/26 19:17:23 INFO SparkContext: Starting job: json at CLIJob.scala:104 | |
15/10/26 19:17:23 INFO DAGScheduler: Got job 1 (json at CLIJob.scala:104) with 2 output partitions | |
15/10/26 19:17:23 INFO DAGScheduler: Final stage: ResultStage 1(json at CLIJob.scala:104) | |
15/10/26 19:17:23 INFO DAGScheduler: Parents of final stage: List() | |
15/10/26 19:17:23 INFO DAGScheduler: Missing parents: List() | |
15/10/26 19:17:23 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at json at CLIJob.scala:104), which has no missing parents | |
15/10/26 19:17:23 INFO MemoryStore: ensureFreeSpace(3776) called with curMem=182853, maxMem=560993402 | |
15/10/26 19:17:23 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.7 KB, free 534.8 MB) | |
15/10/26 19:17:23 INFO MemoryStore: ensureFreeSpace(2070) called with curMem=186629, maxMem=560993402 | |
15/10/26 19:17:23 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.0 KB, free 534.8 MB) | |
15/10/26 19:17:23 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.169.170.124:56734 (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:17:23 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861 | |
15/10/26 19:17:23 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at json at CLIJob.scala:104) | |
15/10/26 19:17:23 INFO YarnClusterScheduler: Adding task set 1.0 with 2 tasks | |
15/10/26 19:17:23 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, ip-10-169-170-124.ec2.internal, PROCESS_LOCAL, 2087 bytes) | |
15/10/26 19:17:23 WARN TaskSetManager: Stage 1 contains a task of very large size (188 KB). The maximum recommended task size is 100 KB. | |
15/10/26 19:17:23 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, ip-10-67-169-247.ec2.internal, PROCESS_LOCAL, 193533 bytes) | |
15/10/26 19:17:23 INFO ContextCleaner: Cleaned accumulator 1 | |
15/10/26 19:17:23 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.169.170.124:56734 in memory (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:17:23 INFO BlockManagerInfo: Removed broadcast_0_piece0 on ip-10-67-169-247.ec2.internal:42476 in memory (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:17:23 INFO BlockManagerInfo: Removed broadcast_0_piece0 on ip-10-169-170-124.ec2.internal:55017 in memory (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:17:23 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on ip-10-169-170-124.ec2.internal:55017 (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:17:23 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on ip-10-67-169-247.ec2.internal:42476 (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:17:23 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 179 ms on ip-10-169-170-124.ec2.internal (1/2) | |
15/10/26 19:17:25 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 2361 ms on ip-10-67-169-247.ec2.internal (2/2) | |
15/10/26 19:17:25 INFO DAGScheduler: ResultStage 1 (json at CLIJob.scala:104) finished in 2.363 s | |
15/10/26 19:17:25 INFO YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool | |
15/10/26 19:17:25 INFO DAGScheduler: Job 1 finished: json at CLIJob.scala:104, took 2.455595 s | |
15/10/26 19:17:25 INFO ContextCleaner: Cleaned accumulator 2 | |
15/10/26 19:17:25 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 10.169.170.124:56734 in memory (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:17:25 INFO BlockManagerInfo: Removed broadcast_1_piece0 on ip-10-169-170-124.ec2.internal:55017 in memory (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:17:25 INFO BlockManagerInfo: Removed broadcast_1_piece0 on ip-10-67-169-247.ec2.internal:42476 in memory (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:17:25 INFO MemoryStore: ensureFreeSpace(93288) called with curMem=0, maxMem=560993402 | |
15/10/26 19:17:25 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 91.1 KB, free 534.9 MB) | |
15/10/26 19:17:25 INFO MemoryStore: ensureFreeSpace(21698) called with curMem=93288, maxMem=560993402 | |
15/10/26 19:17:25 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 21.2 KB, free 534.9 MB) | |
15/10/26 19:17:25 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.169.170.124:56734 (size: 21.2 KB, free: 535.0 MB) | |
15/10/26 19:17:25 INFO SparkContext: Created broadcast 2 from parquet at CLIJob.scala:108 | |
15/10/26 19:17:25 INFO ParquetRelation: Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter | |
15/10/26 19:17:26 INFO DefaultWriterContainer: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter | |
15/10/26 19:17:26 INFO SparkContext: Starting job: parquet at CLIJob.scala:108 | |
15/10/26 19:17:26 INFO DAGScheduler: Got job 2 (parquet at CLIJob.scala:108) with 2 output partitions | |
15/10/26 19:17:26 INFO DAGScheduler: Final stage: ResultStage 2(parquet at CLIJob.scala:108) | |
15/10/26 19:17:26 INFO DAGScheduler: Parents of final stage: List() | |
15/10/26 19:17:26 INFO DAGScheduler: Missing parents: List() | |
15/10/26 19:17:26 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[6] at parquet at CLIJob.scala:108), which has no missing parents | |
15/10/26 19:17:26 INFO MemoryStore: ensureFreeSpace(82904) called with curMem=114986, maxMem=560993402 | |
15/10/26 19:17:26 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 81.0 KB, free 534.8 MB) | |
15/10/26 19:17:26 INFO MemoryStore: ensureFreeSpace(29338) called with curMem=197890, maxMem=560993402 | |
15/10/26 19:17:26 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 28.7 KB, free 534.8 MB) | |
15/10/26 19:17:26 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 10.169.170.124:56734 (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:17:26 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:861 | |
15/10/26 19:17:26 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 2 (MapPartitionsRDD[6] at parquet at CLIJob.scala:108) | |
15/10/26 19:17:26 INFO YarnClusterScheduler: Adding task set 2.0 with 2 tasks | |
15/10/26 19:17:26 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 4, ip-10-169-170-124.ec2.internal, PROCESS_LOCAL, 2087 bytes) | |
15/10/26 19:17:26 WARN TaskSetManager: Stage 2 contains a task of very large size (188 KB). The maximum recommended task size is 100 KB. | |
15/10/26 19:17:26 INFO TaskSetManager: Starting task 1.0 in stage 2.0 (TID 5, ip-10-67-169-247.ec2.internal, PROCESS_LOCAL, 193533 bytes) | |
15/10/26 19:17:26 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on ip-10-169-170-124.ec2.internal:55017 (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:17:26 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on ip-10-67-169-247.ec2.internal:42476 (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:17:27 INFO TaskSetManager: Finished task 1.0 in stage 2.0 (TID 5) in 1700 ms on ip-10-67-169-247.ec2.internal (1/2) | |
15/10/26 19:17:28 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 4) in 2343 ms on ip-10-169-170-124.ec2.internal (2/2) | |
15/10/26 19:17:28 INFO DAGScheduler: ResultStage 2 (parquet at CLIJob.scala:108) finished in 2.344 s | |
15/10/26 19:17:28 INFO YarnClusterScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool | |
15/10/26 19:17:28 INFO DAGScheduler: Job 2 finished: parquet at CLIJob.scala:108, took 2.388523 s | |
15/10/26 19:17:28 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 10.169.170.124:56734 in memory (size: 21.2 KB, free: 535.0 MB) | |
15/10/26 19:17:28 INFO ContextCleaner: Cleaned accumulator 3 | |
15/10/26 19:17:28 INFO BlockManagerInfo: Removed broadcast_3_piece0 on 10.169.170.124:56734 in memory (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:17:28 INFO BlockManagerInfo: Removed broadcast_3_piece0 on ip-10-169-170-124.ec2.internal:55017 in memory (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:17:28 INFO BlockManagerInfo: Removed broadcast_3_piece0 on ip-10-67-169-247.ec2.internal:42476 in memory (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:17:28 INFO DefaultWriterContainer: Job job_201510261917_0000 committed. | |
15/10/26 19:17:28 INFO ParquetRelation: Listing hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.parquet on driver | |
15/10/26 19:17:28 INFO ParquetRelation: Listing hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.parquet on driver | |
15/10/26 19:17:29 INFO ParseDriver: Parsing command: select x.id, x.title, x.description, x.mediaavailableDate as available_date, x.mediaexpirationDate as expiration_date, mediacategories.medianame as media_name, x.mediakeywords as keywords, mediaratings.scheme as rating_scheme, mediaratings.rating, cast(mediaratings.subRatings as String) as sub_ratings, content.plfileduration as duration, x.plmediaprovider as provider, x.ngccontentAdType as ad_type, x.ngcepisodeNumber as episode, ngcnetwork as network, x.ngcseasonNumber as season_number, x.ngcuID as ngc_uid, x.ngcvideoType as video_type from etl lateral view explode(entries) entries as x lateral view explode(x.mediacategories) cat as mediacategories lateral view explode(x.mediaratings) r as mediaratings lateral view explode(x.mediacontent) mediacontent as content lateral view outer explode(x.ngcnetwork) net as ngcnetworkr | |
15/10/26 19:17:30 INFO ParseDriver: Parse Completed | |
15/10/26 19:17:30 ERROR ApplicationMaster: User class threw exception: org.apache.spark.sql.AnalysisException: cannot resolve 'ngcnetwork' given input columns entries, entryCount, $xmlns, content, mediaratings, ngcnetworkr, mediacategories, x, itemsPerPage, startIndex, title; line 1 pos 433 | |
org.apache.spark.sql.AnalysisException: cannot resolve 'ngcnetwork' given input columns entries, entryCount, $xmlns, content, mediaratings, ngcnetworkr, mediacategories, x, itemsPerPage, startIndex, title; line 1 pos 433 | |
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:56) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:53) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:293) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:293) | |
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51) | |
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:292) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:290) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:290) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:249) | |
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) | |
at scala.collection.Iterator$class.foreach(Iterator.scala:727) | |
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) | |
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) | |
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103) | |
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47) | |
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) | |
at scala.collection.AbstractIterator.to(Iterator.scala:1157) | |
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265) | |
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157) | |
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252) | |
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157) | |
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:279) | |
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:290) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionUp$1(QueryPlan.scala:108) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:118) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2$1.apply(QueryPlan.scala:122) | |
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) | |
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) | |
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) | |
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) | |
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244) | |
at scala.collection.AbstractTraversable.map(Traversable.scala:105) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:122) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:126) | |
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) | |
at scala.collection.Iterator$class.foreach(Iterator.scala:727) | |
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) | |
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) | |
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103) | |
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47) | |
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) | |
at scala.collection.AbstractIterator.to(Iterator.scala:1157) | |
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265) | |
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157) | |
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252) | |
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:126) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:53) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:49) | |
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:103) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:49) | |
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:44) | |
at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:908) | |
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:132) | |
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51) | |
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:719) | |
at com.truex.prometheus.CLIJob$$anon$1.execute(CLIJob.scala:114) | |
at com.truex.prometheus.CLIJob$.main(CLIJob.scala:122) | |
at com.truex.prometheus.CLIJob.main(CLIJob.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:606) | |
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525) | |
15/10/26 19:17:30 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.sql.AnalysisException: cannot resolve 'ngcnetwork' given input columns entries, entryCount, $xmlns, content, mediaratings, ngcnetworkr, mediacategories, x, itemsPerPage, startIndex, title; line 1 pos 433) | |
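The AnalysisException that fails this attempt follows directly from the query parsed at 19:17:29: the final lateral view aliases the exploded column as ngcnetworkr, while the SELECT list references the bare column ngcnetwork, which is not among the columns the analyzer can resolve (the column list it prints does include ngcnetworkr). Making the alias and the reference agree should clear the error. A minimal sketch of the corrected reference, assuming the statement is issued through a HiveContext as the stack frame at SQLContext.sql suggests; the sc value, the HiveContext construction, and the abbreviated projection are illustrative, since CLIJob.scala itself is not part of this log:

  import org.apache.spark.sql.hive.HiveContext

  // `sc` is assumed to be the job's existing SparkContext.
  val sqlContext = new HiveContext(sc)

  // Keep the lateral-view alias and the SELECT reference consistent:
  // use `ngcnetworkr` in the projection (as below), or rename the alias
  // to `ngcnetwork` instead.
  val fixed = sqlContext.sql("""
    select x.id, x.title, ngcnetworkr as network
    from etl
    lateral view explode(entries) entries as x
    lateral view outer explode(x.ngcnetwork) net as ngcnetworkr
  """)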
15/10/26 19:17:30 INFO SparkContext: Invoking stop() from shutdown hook | |
15/10/26 19:17:30 INFO SparkUI: Stopped Spark web UI at http://10.169.170.124:59870 | |
15/10/26 19:17:30 INFO DAGScheduler: Stopping DAGScheduler | |
15/10/26 19:17:30 INFO YarnClusterSchedulerBackend: Shutting down all executors | |
15/10/26 19:17:30 INFO YarnClusterSchedulerBackend: Asking each executor to shut down | |
15/10/26 19:17:30 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-67-169-247.ec2.internal:47330 | |
15/10/26 19:17:30 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-169-170-124.ec2.internal:49483 | |
15/10/26 19:17:30 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! | |
15/10/26 19:17:30 INFO MemoryStore: MemoryStore cleared | |
15/10/26 19:17:30 INFO BlockManager: BlockManager stopped | |
15/10/26 19:17:30 INFO BlockManagerMaster: BlockManagerMaster stopped | |
15/10/26 19:17:30 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! | |
15/10/26 19:17:30 INFO SparkContext: Successfully stopped SparkContext | |
15/10/26 19:17:30 INFO ShutdownHookManager: Shutdown hook called | |
15/10/26 19:17:30 INFO ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/spark-18572d02-52d9-4c07-ab61-7ad0e46380e5 | |
15/10/26 19:17:30 INFO ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_01_000001/tmp/spark-157a79bf-0a2b-44b2-873e-17f4a5d7a952 | |
15/10/26 19:17:30 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:17:30 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:17:30 INFO ShutdownHookManager: Deleting directory /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0062/spark-2fd7e113-616d-42dc-9218-113d4d5f0489 | |
LogType:stdout | |
Log Upload Time:26-Oct-2015 19:18:20 | |
LogLength:0 | |
Log Contents: | |
15/10/26 19:19:40 INFO metrics.MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440 | |
15/10/26 19:19:40 INFO metrics.MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-131cdec7:LogsCLI:25072 period:60 /mnt/var/em/raw/i-131cdec7_20151026_LogsCLI_25072_raw.bin | |
Container: container_1444274555723_0062_02_000001 on ip-10-67-169-247.ec2.internal_8041 | |
========================================================================================= | |
LogType:stderr | |
Log Upload Time:26-Oct-2015 19:18:19 | |
LogLength:47936 | |
Log Contents: | |
log4j:ERROR Could not read configuration file from URL [file:/etc/spark/conf/log4j.properties]. | |
java.io.FileNotFoundException: /etc/spark/conf/log4j.properties (No such file or directory) | |
at java.io.FileInputStream.open(Native Method) | |
at java.io.FileInputStream.<init>(FileInputStream.java:146) | |
at java.io.FileInputStream.<init>(FileInputStream.java:101) | |
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90) | |
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188) | |
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557) | |
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526) | |
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127) | |
at org.apache.spark.Logging$class.initializeLogging(Logging.scala:122) | |
at org.apache.spark.Logging$class.initializeIfNecessary(Logging.scala:107) | |
at org.apache.spark.Logging$class.log(Logging.scala:51) | |
at org.apache.spark.deploy.yarn.ApplicationMaster$.log(ApplicationMaster.scala:603) | |
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:617) | |
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala) | |
log4j:ERROR Ignoring configuration file [file:/etc/spark/conf/log4j.properties]. | |
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties | |
SLF4J: Class path contains multiple SLF4J bindings. | |
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/filecache/113/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. | |
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] | |
15/10/26 19:17:32 INFO ApplicationMaster: Registered signal handlers for [TERM, HUP, INT] | |
15/10/26 19:17:34 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1444274555723_0062_000002 | |
15/10/26 19:17:34 INFO SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:17:34 INFO SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:17:34 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:17:35 INFO ApplicationMaster: Starting the user application in a separate Thread | |
15/10/26 19:17:35 INFO ApplicationMaster: Waiting for spark context initialization | |
15/10/26 19:17:35 INFO ApplicationMaster: Waiting for spark context initialization ... | |
15/10/26 19:17:35 INFO SparkContext: Running Spark version 1.5.0 | |
15/10/26 19:17:35 INFO SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:17:35 INFO SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:17:35 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:17:35 INFO Slf4jLogger: Slf4jLogger started | |
15/10/26 19:17:35 INFO Remoting: Starting remoting | |
15/10/26 19:17:36 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:48914] | |
15/10/26 19:17:36 INFO Utils: Successfully started service 'sparkDriver' on port 48914. | |
15/10/26 19:17:36 INFO SparkEnv: Registering MapOutputTracker | |
15/10/26 19:17:36 INFO SparkEnv: Registering BlockManagerMaster | |
15/10/26 19:17:36 INFO DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-3fb51d26-d020-451e-8781-db5cbc517e2d | |
15/10/26 19:17:36 INFO DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-98aea5d9-ffc8-4b8c-af17-b411d2351a90 | |
15/10/26 19:17:36 INFO MemoryStore: MemoryStore started with capacity 535.0 MB | |
15/10/26 19:17:36 INFO HttpFileServer: HTTP File server directory is /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/spark-89103774-dc68-4e2a-b1e3-5997fbb7021c/httpd-de0986d2-9d90-445e-95e6-27cfc9a3b427 | |
15/10/26 19:17:36 INFO HttpServer: Starting HTTP Server | |
15/10/26 19:17:36 INFO Utils: Successfully started service 'HTTP file server' on port 45430. | |
15/10/26 19:17:36 INFO SparkEnv: Registering OutputCommitCoordinator | |
15/10/26 19:17:36 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter | |
15/10/26 19:17:36 INFO Utils: Successfully started service 'SparkUI' on port 55601. | |
15/10/26 19:17:36 INFO SparkUI: Started SparkUI at http://10.67.169.247:55601 | |
15/10/26 19:17:36 INFO YarnClusterScheduler: Created YarnClusterScheduler | |
15/10/26 19:17:36 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set. | |
15/10/26 19:17:36 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33062. | |
15/10/26 19:17:36 INFO NettyBlockTransferService: Server created on 33062 | |
15/10/26 19:17:36 INFO BlockManagerMaster: Trying to register BlockManager | |
15/10/26 19:17:36 INFO BlockManagerMasterEndpoint: Registering block manager 10.67.169.247:33062 with 535.0 MB RAM, BlockManagerId(driver, 10.67.169.247, 33062) | |
15/10/26 19:17:36 INFO BlockManagerMaster: Registered BlockManager | |
15/10/26 19:17:38 INFO MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440 | |
15/10/26 19:17:38 INFO MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-021cded6:ApplicationMaster:07453 period:60 /mnt/var/em/raw/i-021cded6_20151026_ApplicationMaster_07453_raw.bin | |
15/10/26 19:17:38 INFO EventLoggingListener: Logging events to hdfs:///var/log/spark/apps/application_1444274555723_0062_2 | |
15/10/26 19:17:38 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/YarnAM#1430431361]) | |
15/10/26 19:17:38 INFO RMProxy: Connecting to ResourceManager at ip-10-65-200-150.ec2.internal/10.65.200.150:8030 | |
15/10/26 19:17:38 INFO YarnRMClient: Registering the ApplicationMaster | |
15/10/26 19:17:38 INFO YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead | |
15/10/26 19:17:38 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>) | |
15/10/26 19:17:38 INFO YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>) | |
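The 1408 MB requested per executor container is the executor heap plus YARN memory overhead: 1024 MB (matching the -Xmx1024m in the launch contexts below) plus an overhead that in Spark 1.5 defaults to the larger of 384 MB and 10% of the executor memory. A small sketch of that arithmetic, with the default values assumed rather than read from this job's configuration:

  // Executor container size as requested from YARN (Spark 1.5 defaults assumed).
  val executorMemoryMb = 1024                                     // spark.executor.memory
  val overheadMb = math.max(384, (0.10 * executorMemoryMb).toInt) // default spark.yarn.executor.memoryOverhead
  val containerMb = executorMemoryMb + overheadMb                 // 1024 + 384 = 1408 MB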
15/10/26 19:17:38 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals | |
15/10/26 19:17:39 INFO AMRMClientImpl: Received new token for : ip-10-67-169-247.ec2.internal:8041 | |
15/10/26 19:17:39 INFO AMRMClientImpl: Received new token for : ip-10-169-170-124.ec2.internal:8041 | |
15/10/26 19:17:39 INFO YarnAllocator: Launching container container_1444274555723_0062_02_000002 for on host ip-10-67-169-247.ec2.internal | |
15/10/26 19:17:39 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:48914/user/CoarseGrainedScheduler, executorHostname: ip-10-67-169-247.ec2.internal | |
15/10/26 19:17:39 INFO YarnAllocator: Launching container container_1444274555723_0062_02_000003 for on host ip-10-169-170-124.ec2.internal | |
15/10/26 19:17:39 INFO ExecutorRunnable: Starting Executor Container | |
15/10/26 19:17:39 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:48914/user/CoarseGrainedScheduler, executorHostname: ip-10-169-170-124.ec2.internal | |
15/10/26 19:17:39 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them. | |
15/10/26 19:17:39 INFO ExecutorRunnable: Starting Executor Container | |
15/10/26 19:17:39 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 | |
15/10/26 19:17:39 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 | |
15/10/26 19:17:39 INFO ExecutorRunnable: Setting up ContainerLaunchContext | |
15/10/26 19:17:39 INFO ExecutorRunnable: Setting up ContainerLaunchContext | |
15/10/26 19:17:39 INFO ExecutorRunnable: Preparing Local resources | |
15/10/26 19:17:39 INFO ExecutorRunnable: Preparing Local resources | |
15/10/26 19:17:39 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0062/Prometheus-assembly-0.0.1.jar" } size: 162982714 timestamp: 1445887005973 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0062/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar" } size: 206949550 timestamp: 1445887004647 type: FILE visibility: PRIVATE) | |
15/10/26 19:17:39 INFO ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0062/Prometheus-assembly-0.0.1.jar" } size: 162982714 timestamp: 1445887005973 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "ip-10-65-200-150.ec2.internal" port: 8020 file: "/user/hadoop/.sparkStaging/application_1444274555723_0062/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar" } size: 206949550 timestamp: 1445887004647 type: FILE visibility: PRIVATE) | |
15/10/26 19:17:39 INFO ExecutorRunnable: | |
=============================================================================== | |
YARN executor launch context: | |
env: | |
CLASSPATH -> /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*<CPS>{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/* | |
SPARK_LOG_URL_STDERR -> http://ip-10-67-169-247.ec2.internal:8042/node/containerlogs/container_1444274555723_0062_02_000002/hadoop/stderr?start=-4096 | |
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1444274555723_0062 | |
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 206949550,162982714 | |
SPARK_USER -> hadoop | |
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE | |
SPARK_YARN_MODE -> true | |
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1445887004647,1445887005973 | |
SPARK_LOG_URL_STDOUT -> http://ip-10-67-169-247.ec2.internal:8042/node/containerlogs/container_1444274555723_0062_02_000002/hadoop/stdout?start=-4096 | |
SPARK_YARN_CACHE_FILES -> hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0062/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar#__spark__.jar,hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0062/Prometheus-assembly-0.0.1.jar#__app__.jar | |
command: | |
LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-verbose:gc' '-XX:+PrintGCDetails' '-XX:+PrintGCDateStamps' '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=48914' '-Dspark.history.ui.port=18080' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:48914/user/CoarseGrainedScheduler --executor-id 1 --hostname ip-10-67-169-247.ec2.internal --cores 1 --app-id application_1444274555723_0062 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr | |
=============================================================================== | |
15/10/26 19:17:39 INFO ExecutorRunnable: | |
=============================================================================== | |
YARN executor launch context: | |
env: | |
CLASSPATH -> /etc/hadoop/conf:/etc/hive/conf:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/*:/usr/lib/hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*<CPS>{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/share/aws/emr/emrfs/conf<CPS>/usr/share/aws/emr/emrfs/lib/*<CPS>/usr/share/aws/emr/emrfs/auxlib/*<CPS>/usr/share/aws/emr/lib/*<CPS>/usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar<CPS>/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies.jar<CPS>/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop.jar<CPS>/usr/share/aws/emr/cloudwatch-sink/lib/* | |
SPARK_LOG_URL_STDERR -> http://ip-10-169-170-124.ec2.internal:8042/node/containerlogs/container_1444274555723_0062_02_000003/hadoop/stderr?start=-4096 | |
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1444274555723_0062 | |
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 206949550,162982714 | |
SPARK_USER -> hadoop | |
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE | |
SPARK_YARN_MODE -> true | |
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1445887004647,1445887005973 | |
SPARK_LOG_URL_STDOUT -> http://ip-10-169-170-124.ec2.internal:8042/node/containerlogs/container_1444274555723_0062_02_000003/hadoop/stdout?start=-4096 | |
SPARK_YARN_CACHE_FILES -> hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0062/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar#__spark__.jar,hdfs://ip-10-65-200-150.ec2.internal:8020/user/hadoop/.sparkStaging/application_1444274555723_0062/Prometheus-assembly-0.0.1.jar#__app__.jar | |
command: | |
LD_LIBRARY_PATH="/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:$LD_LIBRARY_PATH" {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-verbose:gc' '-XX:+PrintGCDetails' '-XX:+PrintGCDateStamps' '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=48914' '-Dspark.history.ui.port=18080' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:48914/user/CoarseGrainedScheduler --executor-id 2 --hostname ip-10-169-170-124.ec2.internal --cores 1 --app-id application_1444274555723_0062 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr | |
=============================================================================== | |
15/10/26 19:17:39 INFO ContainerManagementProtocolProxy: Opening proxy : ip-10-67-169-247.ec2.internal:8041 | |
15/10/26 19:17:39 INFO ContainerManagementProtocolProxy: Opening proxy : ip-10-169-170-124.ec2.internal:8041 | |
15/10/26 19:17:42 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-67-169-247.ec2.internal:36622 | |
15/10/26 19:17:43 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-169-170-124.ec2.internal:42150 | |
15/10/26 19:17:43 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:45900/user/Executor#-2010341503]) with ID 1 | |
15/10/26 19:17:43 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-67-169-247.ec2.internal:51150 with 535.0 MB RAM, BlockManagerId(1, ip-10-67-169-247.ec2.internal, 51150) | |
15/10/26 19:17:43 INFO YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:58140/user/Executor#642318444]) with ID 2 | |
15/10/26 19:17:43 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8 | |
15/10/26 19:17:43 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done | |
15/10/26 19:17:44 INFO BlockManagerMasterEndpoint: Registering block manager ip-10-169-170-124.ec2.internal:50135 with 535.0 MB RAM, BlockManagerId(2, ip-10-169-170-124.ec2.internal, 50135) | |
15/10/26 19:17:44 INFO HiveContext: Initializing execution hive, version 1.2.1 | |
15/10/26 19:17:45 INFO ClientWrapper: Inspected Hadoop version: 2.6.0-amzn-1 | |
15/10/26 19:17:45 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-amzn-1 | |
15/10/26 19:17:45 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | |
15/10/26 19:17:45 INFO ObjectStore: ObjectStore, initialize called | |
15/10/26 19:17:45 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored | |
15/10/26 19:17:45 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored | |
15/10/26 19:17:48 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" | |
15/10/26 19:17:49 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:49 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:51 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:51 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
15/10/26 19:17:51 INFO ObjectStore: Initialized ObjectStore | |
15/10/26 19:17:51 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
15/10/26 19:17:51 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
15/10/26 19:17:51 INFO HiveMetaStore: Added admin role in metastore | |
15/10/26 19:17:51 INFO HiveMetaStore: Added public role in metastore | |
15/10/26 19:17:52 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
15/10/26 19:17:52 INFO HiveMetaStore: 0: get_all_databases | |
15/10/26 19:17:52 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_all_databases | |
15/10/26 19:17:52 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
15/10/26 19:17:52 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
15/10/26 19:17:52 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:52 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_02_000001/tmp/yarn | |
15/10/26 19:17:52 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_02_000001/tmp/585fc738-c3e1-4f09-bb18-7d5e8837800a_resources | |
15/10/26 19:17:52 INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/585fc738-c3e1-4f09-bb18-7d5e8837800a | |
15/10/26 19:17:52 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_02_000001/tmp/yarn/585fc738-c3e1-4f09-bb18-7d5e8837800a | |
15/10/26 19:17:52 INFO SessionState: Created HDFS directory: /tmp/hive/hadoop/585fc738-c3e1-4f09-bb18-7d5e8837800a/_tmp_space.db | |
15/10/26 19:17:52 INFO HiveContext: default warehouse location is /user/hive/warehouse | |
15/10/26 19:17:52 INFO HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes. | |
15/10/26 19:17:52 INFO ClientWrapper: Inspected Hadoop version: 2.4.0 | |
15/10/26 19:17:52 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.4.0 | |
15/10/26 19:17:52 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:53 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:53 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:53 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:53 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:53 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable | |
15/10/26 19:17:53 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore | |
15/10/26 19:17:53 INFO ObjectStore: ObjectStore, initialize called | |
15/10/26 19:17:53 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored | |
15/10/26 19:17:53 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored | |
15/10/26 19:17:55 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:55 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:55 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:55 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" | |
15/10/26 19:17:56 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:56 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:57 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:57 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:58 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY | |
15/10/26 19:17:58 INFO ObjectStore: Initialized ObjectStore | |
15/10/26 19:17:58 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 | |
15/10/26 19:17:58 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException | |
15/10/26 19:17:58 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:58 INFO HiveMetaStore: Added admin role in metastore | |
15/10/26 19:17:58 INFO HiveMetaStore: Added public role in metastore | |
15/10/26 19:17:59 INFO HiveMetaStore: No user is added in admin role, since config is empty | |
15/10/26 19:17:59 INFO HiveMetaStore: 0: get_all_databases | |
15/10/26 19:17:59 INFO audit: ugi=yarn ip=unknown-ip-addr cmd=get_all_databases | |
15/10/26 19:17:59 INFO HiveMetaStore: 0: get_functions: db=default pat=* | |
15/10/26 19:17:59 INFO audit: ugi=yarn ip=unknown-ip-addr cmd=get_functions: db=default pat=* | |
15/10/26 19:17:59 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. | |
15/10/26 19:17:59 WARN Configuration: mapred-site.xml:an attempt to override final parameter: mapreduce.cluster.local.dir; Ignoring. | |
15/10/26 19:17:59 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_02_000001/tmp/773bfc83-ecd3-40f2-97f0-a95a53260262_resources | |
15/10/26 19:17:59 INFO SessionState: Created HDFS directory: /tmp/hive/yarn/773bfc83-ecd3-40f2-97f0-a95a53260262 | |
15/10/26 19:17:59 INFO SessionState: Created local directory: /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_02_000001/tmp/yarn/773bfc83-ecd3-40f2-97f0-a95a53260262 | |
15/10/26 19:17:59 INFO SessionState: Created HDFS directory: /tmp/hive/yarn/773bfc83-ecd3-40f2-97f0-a95a53260262/_tmp_space.db | |
15/10/26 19:18:08 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id | |
15/10/26 19:18:08 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id | |
15/10/26 19:18:08 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap | |
15/10/26 19:18:08 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition | |
15/10/26 19:18:08 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id | |
15/10/26 19:18:08 INFO SparkContext: Starting job: saveAsTextFile at CLIJob.scala:96 | |
15/10/26 19:18:08 INFO DAGScheduler: Got job 0 (saveAsTextFile at CLIJob.scala:96) with 2 output partitions | |
15/10/26 19:18:08 INFO DAGScheduler: Final stage: ResultStage 0(saveAsTextFile at CLIJob.scala:96) | |
15/10/26 19:18:08 INFO DAGScheduler: Parents of final stage: List() | |
15/10/26 19:18:08 INFO DAGScheduler: Missing parents: List() | |
15/10/26 19:18:08 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at saveAsTextFile at CLIJob.scala:96), which has no missing parents | |
15/10/26 19:18:08 INFO MemoryStore: ensureFreeSpace(135712) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:08 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 132.5 KB, free 534.9 MB) | |
15/10/26 19:18:08 INFO MemoryStore: ensureFreeSpace(47141) called with curMem=135712, maxMem=560993402 | |
15/10/26 19:18:08 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 46.0 KB, free 534.8 MB) | |
15/10/26 19:18:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.67.169.247:33062 (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:18:08 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861 | |
15/10/26 19:18:08 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at saveAsTextFile at CLIJob.scala:96) | |
15/10/26 19:18:08 INFO YarnClusterScheduler: Adding task set 0.0 with 2 tasks | |
15/10/26 19:18:08 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, ip-10-169-170-124.ec2.internal, PROCESS_LOCAL, 2087 bytes) | |
15/10/26 19:18:08 WARN TaskSetManager: Stage 0 contains a task of very large size (188 KB). The maximum recommended task size is 100 KB. | |
15/10/26 19:18:08 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, ip-10-67-169-247.ec2.internal, PROCESS_LOCAL, 193533 bytes) | |
15/10/26 19:18:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-169-170-124.ec2.internal:50135 (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:18:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-67-169-247.ec2.internal:51150 (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:18:10 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1695 ms on ip-10-169-170-124.ec2.internal (1/2) | |
15/10/26 19:18:10 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1679 ms on ip-10-67-169-247.ec2.internal (2/2) | |
15/10/26 19:18:10 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool | |
15/10/26 19:18:10 INFO DAGScheduler: ResultStage 0 (saveAsTextFile at CLIJob.scala:96) finished in 1.705 s | |
15/10/26 19:18:10 INFO DAGScheduler: Job 0 finished: saveAsTextFile at CLIJob.scala:96, took 1.937848 s | |
15/10/26 19:18:10 INFO ContextCleaner: Cleaned accumulator 1 | |
15/10/26 19:18:10 INFO BlockManagerInfo: Removed broadcast_0_piece0 on ip-10-169-170-124.ec2.internal:50135 in memory (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:18:10 INFO BlockManagerInfo: Removed broadcast_0_piece0 on ip-10-67-169-247.ec2.internal:51150 in memory (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:18:10 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 10.67.169.247:33062 in memory (size: 46.0 KB, free: 535.0 MB) | |
15/10/26 19:18:10 INFO SparkContext: Starting job: json at CLIJob.scala:104 | |
15/10/26 19:18:10 INFO DAGScheduler: Got job 1 (json at CLIJob.scala:104) with 2 output partitions | |
15/10/26 19:18:10 INFO DAGScheduler: Final stage: ResultStage 1(json at CLIJob.scala:104) | |
15/10/26 19:18:10 INFO DAGScheduler: Parents of final stage: List() | |
15/10/26 19:18:10 INFO DAGScheduler: Missing parents: List() | |
15/10/26 19:18:10 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[5] at json at CLIJob.scala:104), which has no missing parents | |
15/10/26 19:18:10 INFO MemoryStore: ensureFreeSpace(3776) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:10 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.7 KB, free 535.0 MB) | |
15/10/26 19:18:10 INFO MemoryStore: ensureFreeSpace(2070) called with curMem=3776, maxMem=560993402 | |
15/10/26 19:18:10 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.0 KB, free 535.0 MB) | |
15/10/26 19:18:10 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.67.169.247:33062 (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:18:10 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861 | |
15/10/26 19:18:10 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (MapPartitionsRDD[5] at json at CLIJob.scala:104) | |
15/10/26 19:18:10 INFO YarnClusterScheduler: Adding task set 1.0 with 2 tasks | |
15/10/26 19:18:10 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, ip-10-67-169-247.ec2.internal, PROCESS_LOCAL, 2087 bytes) | |
15/10/26 19:18:10 WARN TaskSetManager: Stage 1 contains a task of very large size (188 KB). The maximum recommended task size is 100 KB. | |
15/10/26 19:18:10 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, ip-10-169-170-124.ec2.internal, PROCESS_LOCAL, 193533 bytes) | |
15/10/26 19:18:10 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on ip-10-67-169-247.ec2.internal:51150 (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:18:10 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on ip-10-169-170-124.ec2.internal:50135 (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:18:10 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 100 ms on ip-10-67-169-247.ec2.internal (1/2) | |
15/10/26 19:18:12 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 2516 ms on ip-10-169-170-124.ec2.internal (2/2) | |
15/10/26 19:18:12 INFO DAGScheduler: ResultStage 1 (json at CLIJob.scala:104) finished in 2.518 s | |
15/10/26 19:18:12 INFO YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool | |
15/10/26 19:18:13 INFO DAGScheduler: Job 1 finished: json at CLIJob.scala:104, took 2.662583 s | |
15/10/26 19:18:13 INFO ContextCleaner: Cleaned accumulator 2 | |
15/10/26 19:18:13 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 10.67.169.247:33062 in memory (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:18:13 INFO BlockManagerInfo: Removed broadcast_1_piece0 on ip-10-67-169-247.ec2.internal:51150 in memory (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:18:13 INFO BlockManagerInfo: Removed broadcast_1_piece0 on ip-10-169-170-124.ec2.internal:50135 in memory (size: 2.0 KB, free: 535.0 MB) | |
15/10/26 19:18:13 INFO MemoryStore: ensureFreeSpace(93288) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:13 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 91.1 KB, free 534.9 MB) | |
15/10/26 19:18:13 INFO MemoryStore: ensureFreeSpace(21698) called with curMem=93288, maxMem=560993402 | |
15/10/26 19:18:13 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 21.2 KB, free 534.9 MB) | |
15/10/26 19:18:13 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.67.169.247:33062 (size: 21.2 KB, free: 535.0 MB) | |
15/10/26 19:18:13 INFO SparkContext: Created broadcast 2 from parquet at CLIJob.scala:108 | |
15/10/26 19:18:13 INFO ParquetRelation: Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter | |
15/10/26 19:18:13 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 10.67.169.247:33062 in memory (size: 21.2 KB, free: 535.0 MB) | |
15/10/26 19:18:13 INFO DefaultWriterContainer: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter | |
15/10/26 19:18:14 INFO SparkContext: Starting job: parquet at CLIJob.scala:108 | |
15/10/26 19:18:14 INFO DAGScheduler: Got job 2 (parquet at CLIJob.scala:108) with 2 output partitions | |
15/10/26 19:18:14 INFO DAGScheduler: Final stage: ResultStage 2(parquet at CLIJob.scala:108) | |
15/10/26 19:18:14 INFO DAGScheduler: Parents of final stage: List() | |
15/10/26 19:18:14 INFO DAGScheduler: Missing parents: List() | |
15/10/26 19:18:14 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[6] at parquet at CLIJob.scala:108), which has no missing parents | |
15/10/26 19:18:14 INFO MemoryStore: ensureFreeSpace(82904) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:14 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 81.0 KB, free 534.9 MB) | |
15/10/26 19:18:14 INFO MemoryStore: ensureFreeSpace(29341) called with curMem=82904, maxMem=560993402 | |
15/10/26 19:18:14 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 28.7 KB, free 534.9 MB) | |
15/10/26 19:18:14 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 10.67.169.247:33062 (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:18:14 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:861 | |
15/10/26 19:18:14 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 2 (MapPartitionsRDD[6] at parquet at CLIJob.scala:108) | |
15/10/26 19:18:14 INFO YarnClusterScheduler: Adding task set 2.0 with 2 tasks | |
15/10/26 19:18:14 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 4, ip-10-169-170-124.ec2.internal, PROCESS_LOCAL, 2087 bytes) | |
15/10/26 19:18:14 WARN TaskSetManager: Stage 2 contains a task of very large size (188 KB). The maximum recommended task size is 100 KB. | |
15/10/26 19:18:14 INFO TaskSetManager: Starting task 1.0 in stage 2.0 (TID 5, ip-10-67-169-247.ec2.internal, PROCESS_LOCAL, 193533 bytes) | |
15/10/26 19:18:14 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on ip-10-169-170-124.ec2.internal:50135 (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:18:14 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on ip-10-67-169-247.ec2.internal:51150 (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:18:15 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 4) in 1193 ms on ip-10-169-170-124.ec2.internal (1/2) | |
15/10/26 19:18:16 INFO TaskSetManager: Finished task 1.0 in stage 2.0 (TID 5) in 2421 ms on ip-10-67-169-247.ec2.internal (2/2) | |
15/10/26 19:18:16 INFO DAGScheduler: ResultStage 2 (parquet at CLIJob.scala:108) finished in 2.422 s | |
15/10/26 19:18:16 INFO YarnClusterScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool | |
15/10/26 19:18:16 INFO DAGScheduler: Job 2 finished: parquet at CLIJob.scala:108, took 2.486127 s | |
15/10/26 19:18:16 INFO DefaultWriterContainer: Job job_201510261918_0000 committed. | |
15/10/26 19:18:16 INFO ParquetRelation: Listing hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.parquet on driver | |
15/10/26 19:18:16 INFO ParquetRelation: Listing hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.parquet on driver | |
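For reference, the two jobs that do succeed here ("json at CLIJob.scala:104" and "parquet at CLIJob.scala:108") are a JSON read into a DataFrame followed by a Parquet write to the path listed just above. The actual CLIJob code is not included in this log, so the Spark 1.5 sketch below only mirrors that pair of calls; the input location is a placeholder, while the output path is the one the driver lists.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Sketch of the json -> parquet round trip recorded by the driver above.
// The input path is a placeholder; the output path is the one ParquetRelation lists.
object JsonToParquetSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("json-to-parquet-sketch"))
    val sqlContext = new SQLContext(sc)
    val etl = sqlContext.read.json("hdfs:///path/to/input.json")   // placeholder input
    etl.write.parquet("hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.parquet")
    sc.stop()
  }
}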
15/10/26 19:18:17 INFO ContextCleaner: Cleaned accumulator 3 | |
15/10/26 19:18:17 INFO BlockManagerInfo: Removed broadcast_3_piece0 on 10.67.169.247:33062 in memory (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:18:17 INFO BlockManagerInfo: Removed broadcast_3_piece0 on ip-10-169-170-124.ec2.internal:50135 in memory (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:18:17 INFO ParseDriver: Parsing command: select x.id, x.title, x.description, x.mediaavailableDate as available_date, x.mediaexpirationDate as expiration_date, mediacategories.medianame as media_name, x.mediakeywords as keywords, mediaratings.scheme as rating_scheme, mediaratings.rating, cast(mediaratings.subRatings as String) as sub_ratings, content.plfileduration as duration, x.plmediaprovider as provider, x.ngccontentAdType as ad_type, x.ngcepisodeNumber as episode, ngcnetwork as network, x.ngcseasonNumber as season_number, x.ngcuID as ngc_uid, x.ngcvideoType as video_type from etl lateral view explode(entries) entries as x lateral view explode(x.mediacategories) cat as mediacategories lateral view explode(x.mediaratings) r as mediaratings lateral view explode(x.mediacontent) mediacontent as content lateral view outer explode(x.ngcnetwork) net as ngcnetworkr | |
15/10/26 19:18:17 INFO BlockManagerInfo: Removed broadcast_3_piece0 on ip-10-67-169-247.ec2.internal:51150 in memory (size: 28.7 KB, free: 535.0 MB) | |
15/10/26 19:18:17 INFO ParseDriver: Parse Completed | |
15/10/26 19:18:18 ERROR ApplicationMaster: User class threw exception: org.apache.spark.sql.AnalysisException: cannot resolve 'ngcnetwork' given input columns entries, entryCount, $xmlns, content, mediaratings, ngcnetworkr, mediacategories, x, itemsPerPage, startIndex, title; line 1 pos 433 | |
org.apache.spark.sql.AnalysisException: cannot resolve 'ngcnetwork' given input columns entries, entryCount, $xmlns, content, mediaratings, ngcnetworkr, mediacategories, x, itemsPerPage, startIndex, title; line 1 pos 433 | |
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:56) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:53) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:293) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:293) | |
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51) | |
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:292) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:290) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:290) | |
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:249) | |
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) | |
at scala.collection.Iterator$class.foreach(Iterator.scala:727) | |
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) | |
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) | |
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103) | |
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47) | |
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) | |
at scala.collection.AbstractIterator.to(Iterator.scala:1157) | |
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265) | |
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157) | |
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252) | |
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157) | |
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:279) | |
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:290) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionUp$1(QueryPlan.scala:108) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:118) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2$1.apply(QueryPlan.scala:122) | |
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) | |
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) | |
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) | |
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) | |
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244) | |
at scala.collection.AbstractTraversable.map(Traversable.scala:105) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:122) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:126) | |
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) | |
at scala.collection.Iterator$class.foreach(Iterator.scala:727) | |
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) | |
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48) | |
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103) | |
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47) | |
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273) | |
at scala.collection.AbstractIterator.to(Iterator.scala:1157) | |
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265) | |
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157) | |
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252) | |
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157) | |
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:126) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:53) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:49) | |
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:103) | |
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:49) | |
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:44) | |
at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:908) | |
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:132) | |
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51) | |
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:719) | |
at com.truex.prometheus.CLIJob$$anon$1.execute(CLIJob.scala:114) | |
at com.truex.prometheus.CLIJob$.main(CLIJob.scala:122) | |
at com.truex.prometheus.CLIJob.main(CLIJob.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:606) | |
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525) | |
15/10/26 19:18:18 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.sql.AnalysisException: cannot resolve 'ngcnetwork' given input columns entries, entryCount, $xmlns, content, mediaratings, ngcnetworkr, mediacategories, x, itemsPerPage, startIndex, title; line 1 pos 433) | |
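The AnalysisException above is why the application ends FAILED: the query parsed at 19:18:17 aliases its last exploded column as ngcnetworkr (note the trailing "r"), but the SELECT list refers to the unqualified ngcnetwork, which is not among the columns the analyzer says it can resolve. The sketch below is not the project's CLIJob code, just the relevant fragment of that query with the alias and the reference made to agree; it assumes the JSON DataFrame is registered as the temp table "etl", as the parsed query implies, and uses a HiveContext because the lateral-view syntax goes through the Hive parser.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Sketch of the failing fragment and one possible fix; abridged from the query
// logged by ParseDriver above, not taken from the CLIJob source.
object NgcNetworkAliasSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ngcnetwork-alias-sketch"))
    val sqlContext = new HiveContext(sc)   // lateral view / explode use the Hive dialect

    // Failing shape (abridged):
    //   select ... ngcnetwork as network ...
    //   from etl
    //   lateral view explode(entries) entries as x
    //   lateral view outer explode(x.ngcnetwork) net as ngcnetworkr
    // The SELECT asks for `ngcnetwork`, but after the lateral view only `ngcnetworkr` exists.

    // One fix: reference the alias that actually exists (renaming the lateral-view alias
    // to `ngcnetwork` instead would work equally well).
    val fixed = sqlContext.sql(
      """select x.id, x.title, ngcnetworkr as network
        |from etl
        |lateral view explode(entries) entries as x
        |lateral view outer explode(x.ngcnetwork) net as ngcnetworkr""".stripMargin)
    fixed.show()
    sc.stop()
  }
}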
15/10/26 19:18:18 INFO SparkContext: Invoking stop() from shutdown hook | |
15/10/26 19:18:18 INFO SparkUI: Stopped Spark web UI at http://10.67.169.247:55601 | |
15/10/26 19:18:18 INFO DAGScheduler: Stopping DAGScheduler | |
15/10/26 19:18:18 INFO YarnClusterSchedulerBackend: Shutting down all executors | |
15/10/26 19:18:18 INFO YarnClusterSchedulerBackend: Asking each executor to shut down | |
15/10/26 19:18:18 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-67-169-247.ec2.internal:45900 | |
15/10/26 19:18:18 INFO ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. ip-10-169-170-124.ec2.internal:58140 | |
15/10/26 19:18:18 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! | |
15/10/26 19:18:18 INFO MemoryStore: MemoryStore cleared | |
15/10/26 19:18:18 INFO BlockManager: BlockManager stopped | |
15/10/26 19:18:18 INFO BlockManagerMaster: BlockManagerMaster stopped | |
15/10/26 19:18:18 INFO SparkContext: Successfully stopped SparkContext | |
15/10/26 19:18:18 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! | |
15/10/26 19:18:18 INFO ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: org.apache.spark.sql.AnalysisException: cannot resolve 'ngcnetwork' given input columns entries, entryCount, $xmlns, content, mediaratings, ngcnetworkr, mediacategories, x, itemsPerPage, startIndex, title; line 1 pos 433) | |
15/10/26 19:18:18 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:18:18 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:18:18 INFO AMRMClientImpl: Waiting for application to be successfully unregistered. | |
15/10/26 19:18:18 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/26 19:18:18 INFO ApplicationMaster: Deleting staging directory .sparkStaging/application_1444274555723_0062 | |
15/10/26 19:18:18 INFO ShutdownHookManager: Shutdown hook called | |
15/10/26 19:18:18 INFO ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/container_1444274555723_0062_02_000001/tmp/spark-f6b32d95-6b26-4888-a3ec-7d4d8ea3d241 | |
15/10/26 19:18:18 INFO ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/spark-89103774-dc68-4e2a-b1e3-5997fbb7021c | |
15/10/26 19:18:18 INFO ShutdownHookManager: Deleting directory /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0062/spark-79ae0c07-dbb7-4437-a053-e2797818f143 | |
LogType:stdout | |
Log Upload Time:26-Oct-2015 19:18:19 | |
LogLength:0 | |
Log Contents: | |
Container: container_1444274555723_0062_01_000003 on ip-10-67-169-247.ec2.internal_8041 | |
========================================================================================= | |
LogType:stderr | |
Log Upload Time:26-Oct-2015 19:18:19 | |
LogLength:9330 | |
Log Contents: | |
SLF4J: Class path contains multiple SLF4J bindings. | |
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/filecache/113/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. | |
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] | |
15/10/26 19:16:57 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT] | |
15/10/26 19:16:58 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:16:58 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:16:58 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:16:59 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/26 19:16:59 INFO Remoting: Starting remoting | |
15/10/26 19:16:59 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:55028] | |
15/10/26 19:16:59 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 55028. | |
15/10/26 19:17:00 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:17:00 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:17:00 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:17:00 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:17:00 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:17:00 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/26 19:17:00 INFO Remoting: Starting remoting | |
15/10/26 19:17:00 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/26 19:17:00 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:47330] | |
15/10/26 19:17:00 INFO util.Utils: Successfully started service 'sparkExecutor' on port 47330. | |
15/10/26 19:17:00 INFO storage.DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-bfae1bb8-8c10-496e-ae5d-f353ce05d5fa | |
15/10/26 19:17:00 INFO storage.DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-ef805e8a-116a-4988-8df8-03c1e65d517a | |
15/10/26 19:17:00 INFO storage.MemoryStore: MemoryStore started with capacity 535.0 MB | |
15/10/26 19:17:01 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://[email protected]:52900/user/CoarseGrainedScheduler | |
15/10/26 19:17:01 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver | |
15/10/26 19:17:01 INFO executor.Executor: Starting executor ID 2 on host ip-10-67-169-247.ec2.internal | |
15/10/26 19:17:01 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42476. | |
15/10/26 19:17:01 INFO netty.NettyBlockTransferService: Server created on 42476 | |
15/10/26 19:17:01 INFO storage.BlockManagerMaster: Trying to register BlockManager | |
15/10/26 19:17:01 INFO storage.BlockManagerMaster: Registered BlockManager | |
15/10/26 19:17:01 INFO storage.BlockManager: Registering executor with local external shuffle service. | |
15/10/26 19:17:20 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 0 | |
15/10/26 19:17:20 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0) | |
15/10/26 19:17:21 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 0 | |
15/10/26 19:17:21 INFO storage.MemoryStore: ensureFreeSpace(47141) called with curMem=0, maxMem=560993402 | |
15/10/26 19:17:21 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 46.0 KB, free 535.0 MB) | |
15/10/26 19:17:21 INFO broadcast.TorrentBroadcast: Reading broadcast variable 0 took 246 ms | |
15/10/26 19:17:21 INFO storage.MemoryStore: ensureFreeSpace(135712) called with curMem=47141, maxMem=560993402 | |
15/10/26 19:17:21 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 132.5 KB, free 534.8 MB) | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap | |
15/10/26 19:17:21 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition | |
15/10/26 19:17:22 INFO metrics.MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440 | |
15/10/26 19:17:22 INFO metrics.MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-021cded6:CoarseGrainedExecutorBackend:07304 period:60 /mnt/var/em/raw/i-021cded6_20151026_CoarseGrainedExecutorBackend_07304_raw.bin | |
15/10/26 19:17:22 INFO output.FileOutputCommitter: Saved output of task 'attempt_201510261917_0000_m_000000_0' to hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.original/_temporary/0/task_201510261917_0000_m_000000 | |
15/10/26 19:17:22 INFO mapred.SparkHadoopMapRedUtil: attempt_201510261917_0000_m_000000_0: Committed | |
15/10/26 19:17:22 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 1884 bytes result sent to driver | |
15/10/26 19:17:23 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 3 | |
15/10/26 19:17:23 INFO executor.Executor: Running task 1.0 in stage 1.0 (TID 3) | |
15/10/26 19:17:23 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 1 | |
15/10/26 19:17:23 INFO storage.MemoryStore: ensureFreeSpace(2070) called with curMem=0, maxMem=560993402 | |
15/10/26 19:17:23 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.0 KB, free 535.0 MB) | |
15/10/26 19:17:23 INFO broadcast.TorrentBroadcast: Reading broadcast variable 1 took 16 ms | |
15/10/26 19:17:23 INFO storage.MemoryStore: ensureFreeSpace(3776) called with curMem=2070, maxMem=560993402 | |
15/10/26 19:17:23 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.7 KB, free 535.0 MB) | |
15/10/26 19:17:25 INFO executor.Executor: Finished task 1.0 in stage 1.0 (TID 3). 6394 bytes result sent to driver | |
15/10/26 19:17:26 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 5 | |
15/10/26 19:17:26 INFO executor.Executor: Running task 1.0 in stage 2.0 (TID 5) | |
15/10/26 19:17:26 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 3 | |
15/10/26 19:17:26 INFO storage.MemoryStore: ensureFreeSpace(29338) called with curMem=0, maxMem=560993402 | |
15/10/26 19:17:26 INFO storage.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 28.7 KB, free 535.0 MB) | |
15/10/26 19:17:26 INFO broadcast.TorrentBroadcast: Reading broadcast variable 3 took 97 ms | |
15/10/26 19:17:26 INFO storage.MemoryStore: ensureFreeSpace(82904) called with curMem=29338, maxMem=560993402 | |
15/10/26 19:17:26 INFO storage.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 81.0 KB, free 534.9 MB) | |
15/10/26 19:17:26 INFO datasources.DefaultWriterContainer: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter | |
15/10/26 19:17:26 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library | |
15/10/26 19:17:26 INFO compress.CodecPool: Got brand-new compressor [.gz] | |
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". | |
SLF4J: Defaulting to no-operation (NOP) logger implementation | |
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. | |
15/10/26 19:17:27 INFO output.FileOutputCommitter: Saved output of task 'attempt_201510261917_0002_m_000001_0' to hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.parquet/_temporary/0/task_201510261917_0002_m_000001 | |
15/10/26 19:17:27 INFO mapred.SparkHadoopMapRedUtil: attempt_201510261917_0002_m_000001_0: Committed | |
15/10/26 19:17:27 INFO executor.Executor: Finished task 1.0 in stage 2.0 (TID 5). 935 bytes result sent to driver | |
15/10/26 19:17:30 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown | |
15/10/26 19:17:30 INFO storage.MemoryStore: MemoryStore cleared | |
15/10/26 19:17:30 INFO storage.BlockManager: BlockManager stopped | |
15/10/26 19:17:30 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:17:30 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:17:30 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/26 19:17:30 INFO util.ShutdownHookManager: Shutdown hook called | |
LogType:stdout | |
Log Upload Time:26-Oct-2015 19:18:19 | |
LogLength:15373 | |
Log Contents: | |
2015-10-26T19:16:59.314+0000: [GC2015-10-26T19:16:59.314+0000: [ParNew: 272640K->17461K(306688K), 0.0270170 secs] 272640K->17461K(1014528K), 0.0271330 secs] [Times: user=0.06 sys=0.02, real=0.02 secs] | |
2015-10-26T19:16:59.342+0000: [GC [1 CMS-initial-mark: 0K(707840K)] 20296K(1014528K), 0.0054150 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] | |
2015-10-26T19:16:59.381+0000: [CMS-concurrent-mark: 0.032/0.034 secs] [Times: user=0.08 sys=0.00, real=0.03 secs] | |
2015-10-26T19:16:59.383+0000: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] | |
2015-10-26T19:17:00.891+0000: [CMS-concurrent-abortable-preclean: 1.155/1.507 secs] [Times: user=3.12 sys=0.51, real=1.51 secs] | |
2015-10-26T19:17:00.891+0000: [GC[YG occupancy: 164870 K (306688 K)]2015-10-26T19:17:00.891+0000: [Rescan (parallel) , 0.0125550 secs]2015-10-26T19:17:00.904+0000: [weak refs processing, 0.0000330 secs]2015-10-26T19:17:00.904+0000: [class unloading, 0.0021010 secs]2015-10-26T19:17:00.906+0000: [scrub symbol table, 0.0032390 secs]2015-10-26T19:17:00.909+0000: [scrub string table, 0.0002660 secs] [1 CMS-remark: 0K(707840K)] 164870K(1014528K), 0.0185210 secs] [Times: user=0.05 sys=0.00, real=0.02 secs] | |
2015-10-26T19:17:00.915+0000: [CMS-concurrent-sweep: 0.005/0.005 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] | |
2015-10-26T19:17:00.939+0000: [CMS-concurrent-reset: 0.024/0.024 secs] [Times: user=0.03 sys=0.02, real=0.02 secs] | |
2015-10-26T19:17:21.270+0000: [GC2015-10-26T19:17:21.270+0000: [ParNew: 290101K->33145K(306688K), 0.0699540 secs] 290101K->55311K(1014528K), 0.0700280 secs] [Times: user=0.17 sys=0.06, real=0.07 secs] | |
2015-10-26T19:17:22.117+0000: [GC [1 CMS-initial-mark: 22165K(707840K)] 124606K(1014528K), 0.0196040 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] | |
2015-10-26T19:17:22.184+0000: [CMS-concurrent-mark: 0.045/0.047 secs] [Times: user=0.11 sys=0.00, real=0.04 secs] | |
2015-10-26T19:17:22.207+0000: [CMS-concurrent-preclean: 0.018/0.023 secs] [Times: user=0.04 sys=0.01, real=0.03 secs] | |
2015-10-26T19:17:26.572+0000: [GC2015-10-26T19:17:26.572+0000: [ParNew: 305785K->22286K(306688K), 0.0409830 secs] 327951K->64468K(1014528K), 0.0410630 secs] [Times: user=0.08 sys=0.04, real=0.04 secs] | |
CMS: abort preclean due to time 2015-10-26T19:17:27.268+0000: [CMS-concurrent-abortable-preclean: 3.063/5.061 secs] [Times: user=7.17 sys=0.78, real=5.06 secs] | |
2015-10-26T19:17:27.268+0000: [GC[YG occupancy: 90915 K (306688 K)]2015-10-26T19:17:27.268+0000: [Rescan (parallel) , 0.0097820 secs]2015-10-26T19:17:27.278+0000: [weak refs processing, 0.0000870 secs]2015-10-26T19:17:27.278+0000: [class unloading, 0.0109740 secs]2015-10-26T19:17:27.289+0000: [scrub symbol table, 0.0065190 secs]2015-10-26T19:17:27.296+0000: [scrub string table, 0.0004900 secs] [1 CMS-remark: 42182K(707840K)] 133097K(1014528K), 0.0283410 secs] [Times: user=0.06 sys=0.00, real=0.03 secs] | |
2015-10-26T19:17:27.311+0000: [CMS-concurrent-sweep: 0.014/0.014 secs] [Times: user=0.04 sys=0.01, real=0.01 secs] | |
2015-10-26T19:17:27.314+0000: [CMS-concurrent-reset: 0.003/0.003 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] | |
Oct 26, 2015 7:17:26 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: GZIP | |
Oct 26, 2015 7:17:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728 | |
Oct 26, 2015 7:17:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576 | |
Oct 26, 2015 7:17:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576 | |
Oct 26, 2015 7:17:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on | |
Oct 26, 2015 7:17:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off | |
Oct 26, 2015 7:17:26 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0 | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 133,966 | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 129B for [$xmlns, dcterms] BINARY: 1 values, 36B raw, 54B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 141B for [$xmlns, media] BINARY: 1 values, 40B raw, 58B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 144B for [$xmlns, ngc] BINARY: 1 values, 41B raw, 59B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 169B for [$xmlns, pl] BINARY: 1 values, 49B raw, 67B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 187B for [$xmlns, pla] BINARY: 1 values, 55B raw, 73B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 194B for [$xmlns, plfile] BINARY: 1 values, 58B raw, 74B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 182B for [$xmlns, plmedia] BINARY: 1 values, 54B raw, 70B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 190B for [$xmlns, plrelease] BINARY: 1 values, 56B raw, 74B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 537B for [entries, bag, array_element, description] BINARY: 100 values, 109B raw, 130B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 98 entries, 19,996B raw, 98B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 993B for [entries, bag, array_element, id] BINARY: 100 values, 6,716B raw, 839B comp, 1 pages, encodings: [PLAIN, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 161B for [entries, bag, array_element, mediaavailableDate] INT64: 100 values, 96B raw, 117B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 49 entries, 392B raw, 49B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 147B for [entries, bag, array_element, mediacategories, bag, array_element, medialabel] BINARY: 190 values, 117B raw, 108B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 7 entries, 86B raw, 7B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 257B for [entries, bag, array_element, mediacategories, bag, array_element, medianame] BINARY: 190 values, 178B raw, 159B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 24 entries, 822B raw, 24B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 126B for [entries, bag, array_element, mediacategories, bag, array_element, mediascheme] BINARY: 190 values, 79B raw, 84B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 22B raw, 2B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 207B for [entries, bag, array_element, mediacontent, bag, array_element, plfileduration] DOUBLE: 200 values, 189B raw, 163B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 29 entries, 232B raw, 29B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 62B for [entries, bag, array_element, mediacopyright] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 62B for [entries, bag, array_element, mediacopyrightUrl] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 59B for [entries, bag, array_element, mediacountries, bag, array_element] BINARY: 100 values, 17B raw, 36B comp, 1 pages, encodings: [PLAIN, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 122B for [entries, bag, array_element, mediacredits, bag, array_element, mediarole] BINARY: 181 values, 80B raw, 89B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 13B raw, 2B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 94B for [entries, bag, array_element, mediacredits, bag, array_element, mediascheme] BINARY: 181 values, 61B raw, 67B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 248B for [entries, bag, array_element, mediacredits, bag, array_element, mediavalue] BINARY: 181 values, 198B raw, 198B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 43 entries, 757B raw, 43B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 67B for [entries, bag, array_element, mediaexcludeCountries] BOOLEAN: 100 values, 29B raw, 39B comp, 1 pages, encodings: [PLAIN, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 148B for [entries, bag, array_element, mediaexpirationDate] INT64: 100 values, 83B raw, 104B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 21 entries, 168B raw, 21B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 1,806B for [entries, bag, array_element, mediakeywords] BINARY: 100 values, 4,910B raw, 1,677B comp, 1 pages, encodings: [PLAIN, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 87B for [entries, bag, array_element, mediaratings, bag, array_element, rating] BINARY: 100 values, 32B raw, 51B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 18B raw, 2B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 83B for [entries, bag, array_element, mediaratings, bag, array_element, scheme] BINARY: 100 values, 20B raw, 37B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 14B raw, 1B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 75B for [entries, bag, array_element, mediaratings, bag, array_element, subRatings, bag, array_element] BINARY: 100 values, 29B raw, 46B comp, 1 pages, encodings: [PLAIN, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 62B for [entries, bag, array_element, mediatext] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 79B for [entries, bag, array_element, mediathumbnails, bag, array_element, plfileduration] DOUBLE: 100 values, 20B raw, 37B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 8B raw, 1B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 74B for [entries, bag, array_element, ngccontentAdType] BINARY: 100 values, 22B raw, 41B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 15B raw, 2B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 163B for [entries, bag, array_element, ngcepisodeNumber] INT64: 100 values, 96B raw, 119B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 30 entries, 240B raw, 30B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 107B for [entries, bag, array_element, ngcnetwork, bag, array_element] BINARY: 100 values, 35B raw, 54B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 35B raw, 2B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 120B for [entries, bag, array_element, ngcseasonNumber] INT64: 100 values, 57B raw, 77B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 6 entries, 48B raw, 6B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 707B for [entries, bag, array_element, ngcuID] BINARY: 100 values, 2,711B raw, 639B comp, 1 pages, encodings: [PLAIN, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 86B for [entries, bag, array_element, ngcvideoType] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 16B raw, 1B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 177B for [entries, bag, array_element, plmediachapters, bag, array_element, plmediaendTime] DOUBLE: 569 values, 207B raw, 133B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 46 entries, 368B raw, 46B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 797B for [entries, bag, array_element, plmediachapters, bag, array_element, plmediastartTime] DOUBLE: 569 values, 809B raw, 753B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 411 entries, 3,288B raw, 411B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 113B for [entries, bag, array_element, plmediachapters, bag, array_element, plmediathumbnailUrl] BINARY: 569 values, 161B raw, 85B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 125B for [entries, bag, array_element, plmediachapters, bag, array_element, plmediatitle] BINARY: 569 values, 172B raw, 95B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 10B raw, 2B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 68B for [entries, bag, array_element, plmediaprovider] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 7B raw, 1B comp} | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 1,358B for [entries, bag, array_element, title] BINARY: 100 values, 2,271B raw, 1,300B comp, 1 pages, encodings: [PLAIN, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [entryCount] INT64: 1 values, 14B raw, 29B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [itemsPerPage] INT64: 1 values, 14B raw, 29B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [startIndex] INT64: 1 values, 14B raw, 29B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Oct 26, 2015 7:17:27 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 110B for [title] BINARY: 1 values, 29B raw, 47B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE] | |
Container: container_1444274555723_0062_02_000002 on ip-10-67-169-247.ec2.internal_8041 | |
========================================================================================= | |
LogType:stderr | |
Log Upload Time:26-Oct-2015 19:18:19 | |
LogLength:9328 | |
Log Contents: | |
SLF4J: Class path contains multiple SLF4J bindings. | |
SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/filecache/113/spark-assembly-1.5.0-hadoop2.6.0-amzn-1.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] | |
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. | |
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] | |
15/10/26 19:17:40 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT] | |
15/10/26 19:17:41 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:17:41 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:17:41 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:17:42 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/26 19:17:42 INFO Remoting: Starting remoting | |
15/10/26 19:17:42 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:36622] | |
15/10/26 19:17:42 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 36622. | |
15/10/26 19:17:42 INFO spark.SecurityManager: Changing view acls to: yarn,hadoop | |
15/10/26 19:17:42 INFO spark.SecurityManager: Changing modify acls to: yarn,hadoop | |
15/10/26 19:17:42 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hadoop); users with modify permissions: Set(yarn, hadoop) | |
15/10/26 19:17:42 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:17:42 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:17:42 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/26 19:17:42 INFO slf4j.Slf4jLogger: Slf4jLogger started | |
15/10/26 19:17:42 INFO Remoting: Starting remoting | |
15/10/26 19:17:43 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:45900] | |
15/10/26 19:17:43 INFO util.Utils: Successfully started service 'sparkExecutor' on port 45900. | |
15/10/26 19:17:43 INFO storage.DiskBlockManager: Created local directory at /mnt/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-e23dbd95-5935-42c0-8057-f3eb9f502201 | |
15/10/26 19:17:43 INFO storage.DiskBlockManager: Created local directory at /mnt1/yarn/usercache/hadoop/appcache/application_1444274555723_0062/blockmgr-5d4242c5-c404-4565-b6ff-beb2c263115b | |
15/10/26 19:17:43 INFO storage.MemoryStore: MemoryStore started with capacity 535.0 MB | |
15/10/26 19:17:43 INFO executor.CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://[email protected]:48914/user/CoarseGrainedScheduler | |
15/10/26 19:17:43 INFO executor.CoarseGrainedExecutorBackend: Successfully registered with driver | |
15/10/26 19:17:43 INFO executor.Executor: Starting executor ID 1 on host ip-10-67-169-247.ec2.internal | |
15/10/26 19:17:43 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51150. | |
15/10/26 19:17:43 INFO netty.NettyBlockTransferService: Server created on 51150 | |
15/10/26 19:17:43 INFO storage.BlockManagerMaster: Trying to register BlockManager | |
15/10/26 19:17:43 INFO storage.BlockManagerMaster: Registered BlockManager | |
15/10/26 19:17:43 INFO storage.BlockManager: Registering executor with local external shuffle service. | |
15/10/26 19:18:08 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1 | |
15/10/26 19:18:08 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1) | |
15/10/26 19:18:08 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 0 | |
15/10/26 19:18:08 INFO storage.MemoryStore: ensureFreeSpace(47141) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:08 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 46.0 KB, free 535.0 MB) | |
15/10/26 19:18:08 INFO broadcast.TorrentBroadcast: Reading broadcast variable 0 took 295 ms | |
15/10/26 19:18:09 INFO storage.MemoryStore: ensureFreeSpace(135712) called with curMem=47141, maxMem=560993402 | |
15/10/26 19:18:09 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 132.5 KB, free 534.8 MB) | |
15/10/26 19:18:09 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id | |
15/10/26 19:18:09 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id | |
15/10/26 19:18:09 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id | |
15/10/26 19:18:09 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap | |
15/10/26 19:18:09 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition | |
15/10/26 19:18:09 INFO metrics.MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1444274560440 | |
15/10/26 19:18:09 INFO metrics.MetricsSaver: Created MetricsSaver j-2US4HNPLS1SJO:i-021cded6:CoarseGrainedExecutorBackend:07540 period:60 /mnt/var/em/raw/i-021cded6_20151026_CoarseGrainedExecutorBackend_07540_raw.bin | |
15/10/26 19:18:10 INFO output.FileOutputCommitter: Saved output of task 'attempt_201510261918_0000_m_000001_1' to hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.original/_temporary/0/task_201510261918_0000_m_000001 | |
15/10/26 19:18:10 INFO mapred.SparkHadoopMapRedUtil: attempt_201510261918_0000_m_000001_1: Committed | |
15/10/26 19:18:10 INFO executor.Executor: Finished task 1.0 in stage 0.0 (TID 1). 1884 bytes result sent to driver | |
15/10/26 19:18:10 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 2 | |
15/10/26 19:18:10 INFO executor.Executor: Running task 0.0 in stage 1.0 (TID 2) | |
15/10/26 19:18:10 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 1 | |
15/10/26 19:18:10 INFO storage.MemoryStore: ensureFreeSpace(2070) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:10 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.0 KB, free 535.0 MB) | |
15/10/26 19:18:10 INFO broadcast.TorrentBroadcast: Reading broadcast variable 1 took 14 ms | |
15/10/26 19:18:10 INFO storage.MemoryStore: ensureFreeSpace(3776) called with curMem=2070, maxMem=560993402 | |
15/10/26 19:18:10 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.7 KB, free 535.0 MB) | |
15/10/26 19:18:10 INFO executor.Executor: Finished task 0.0 in stage 1.0 (TID 2). 1615 bytes result sent to driver | |
15/10/26 19:18:14 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 5 | |
15/10/26 19:18:14 INFO executor.Executor: Running task 1.0 in stage 2.0 (TID 5) | |
15/10/26 19:18:14 INFO broadcast.TorrentBroadcast: Started reading broadcast variable 3 | |
15/10/26 19:18:14 INFO storage.MemoryStore: ensureFreeSpace(29341) called with curMem=0, maxMem=560993402 | |
15/10/26 19:18:14 INFO storage.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 28.7 KB, free 535.0 MB) | |
15/10/26 19:18:14 INFO broadcast.TorrentBroadcast: Reading broadcast variable 3 took 9 ms | |
15/10/26 19:18:14 INFO storage.MemoryStore: ensureFreeSpace(82904) called with curMem=29341, maxMem=560993402 | |
15/10/26 19:18:14 INFO storage.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 81.0 KB, free 534.9 MB) | |
15/10/26 19:18:14 INFO datasources.DefaultWriterContainer: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter | |
15/10/26 19:18:15 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library | |
15/10/26 19:18:15 INFO compress.CodecPool: Got brand-new compressor [.gz] | |
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". | |
SLF4J: Defaulting to no-operation (NOP) logger implementation | |
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details. | |
15/10/26 19:18:16 INFO output.FileOutputCommitter: Saved output of task 'attempt_201510261918_0002_m_000001_0' to hdfs://ip-10-65-200-150.ec2.internal:8020/tmp/ngcngw-analytics.parquet/_temporary/0/task_201510261918_0002_m_000001 | |
15/10/26 19:18:16 INFO mapred.SparkHadoopMapRedUtil: attempt_201510261918_0002_m_000001_0: Committed | |
15/10/26 19:18:16 INFO executor.Executor: Finished task 1.0 in stage 2.0 (TID 5). 935 bytes result sent to driver | |
15/10/26 19:18:18 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown | |
15/10/26 19:18:18 INFO storage.MemoryStore: MemoryStore cleared | |
15/10/26 19:18:18 INFO storage.BlockManager: BlockManager stopped | |
15/10/26 19:18:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. | |
15/10/26 19:18:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. | |
15/10/26 19:18:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down. | |
15/10/26 19:18:18 INFO util.ShutdownHookManager: Shutdown hook called | |
LogType:stdout | |
Log Upload Time:26-Oct-2015 19:18:19 | |
LogLength:15622 | |
Log Contents: | |
2015-10-26T19:17:42.280+0000: [GC [1 CMS-initial-mark: 0K(707840K)] 267413K(1014528K), 0.0496250 secs] [Times: user=0.05 sys=0.00, real=0.05 secs] | |
2015-10-26T19:17:42.353+0000: [CMS-concurrent-mark: 0.023/0.024 secs] [Times: user=0.03 sys=0.01, real=0.02 secs] | |
2015-10-26T19:17:42.359+0000: [GC2015-10-26T19:17:42.359+0000: [ParNew: 272640K->17645K(306688K), 0.0271940 secs] 272640K->17645K(1014528K), 0.0272670 secs] [Times: user=0.06 sys=0.01, real=0.03 secs] | |
2015-10-26T19:17:42.386+0000: [CMS-concurrent-preclean: 0.005/0.033 secs] [Times: user=0.07 sys=0.01, real=0.04 secs] | |
2015-10-26T19:17:42.387+0000: [GC[YG occupancy: 17645 K (306688 K)]2015-10-26T19:17:42.387+0000: [Rescan (parallel) , 0.0081570 secs]2015-10-26T19:17:42.395+0000: [weak refs processing, 0.0000300 secs]2015-10-26T19:17:42.395+0000: [class unloading, 0.0008230 secs]2015-10-26T19:17:42.396+0000: [scrub symbol table, 0.0025190 secs]2015-10-26T19:17:42.398+0000: [scrub string table, 0.0002350 secs] [1 CMS-remark: 0K(707840K)] 17645K(1014528K), 0.0120530 secs] [Times: user=0.03 sys=0.00, real=0.01 secs] | |
2015-10-26T19:17:42.403+0000: [CMS-concurrent-sweep: 0.004/0.005 secs] [Times: user=0.02 sys=0.01, real=0.00 secs] | |
2015-10-26T19:17:42.441+0000: [CMS-concurrent-reset: 0.038/0.038 secs] [Times: user=0.06 sys=0.03, real=0.04 secs] | |
2015-10-26T19:17:45.798+0000: [GC [1 CMS-initial-mark: 0K(707840K)] 249276K(1014528K), 0.0481070 secs] [Times: user=0.05 sys=0.00, real=0.04 secs] | |
2015-10-26T19:17:45.879+0000: [CMS-concurrent-mark: 0.033/0.033 secs] [Times: user=0.03 sys=0.00, real=0.03 secs] | |
2015-10-26T19:17:45.907+0000: [CMS-concurrent-preclean: 0.023/0.028 secs] [Times: user=0.02 sys=0.01, real=0.03 secs] | |
CMS: abort preclean due to time 2015-10-26T19:17:51.018+0000: [CMS-concurrent-abortable-preclean: 1.706/5.110 secs] [Times: user=1.74 sys=0.00, real=5.11 secs] | |
2015-10-26T19:17:51.018+0000: [GC[YG occupancy: 249276 K (306688 K)]2015-10-26T19:17:51.018+0000: [Rescan (parallel) , 0.0723200 secs]2015-10-26T19:17:51.090+0000: [weak refs processing, 0.0000410 secs]2015-10-26T19:17:51.090+0000: [class unloading, 0.0020710 secs]2015-10-26T19:17:51.092+0000: [scrub symbol table, 0.0045250 secs]2015-10-26T19:17:51.097+0000: [scrub string table, 0.0003570 secs] [1 CMS-remark: 0K(707840K)] 249276K(1014528K), 0.0797430 secs] [Times: user=0.21 sys=0.01, real=0.08 secs] | |
2015-10-26T19:17:51.105+0000: [CMS-concurrent-sweep: 0.007/0.007 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] | |
2015-10-26T19:17:51.108+0000: [CMS-concurrent-reset: 0.003/0.003 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] | |
2015-10-26T19:18:08.846+0000: [GC2015-10-26T19:18:08.846+0000: [ParNew: 290285K->30500K(306688K), 0.0754230 secs] 290285K->52777K(1014528K), 0.0755050 secs] [Times: user=0.18 sys=0.06, real=0.08 secs] | |
2015-10-26T19:18:15.088+0000: [GC [1 CMS-initial-mark: 22276K(707840K)] 324700K(1014528K), 0.0625580 secs] [Times: user=0.07 sys=0.00, real=0.06 secs] | |
2015-10-26T19:18:15.177+0000: [GC2015-10-26T19:18:15.177+0000: [ParNew: 303140K->25111K(306688K), 0.0407170 secs] 325417K->67198K(1014528K), 0.0407970 secs] [Times: user=0.10 sys=0.01, real=0.04 secs] | |
2015-10-26T19:18:15.264+0000: [CMS-concurrent-mark: 0.055/0.113 secs] [Times: user=0.28 sys=0.03, real=0.11 secs] | |
2015-10-26T19:18:15.309+0000: [CMS-concurrent-preclean: 0.032/0.046 secs] [Times: user=0.06 sys=0.02, real=0.05 secs] | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.codec.CodecConfig: Compression: GZIP | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet block size to 134217728 | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet page size to 1048576 | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Parquet dictionary page size to 1048576 | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Dictionary is on | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Validation is off | |
Oct 26, 2015 7:18:14 PM INFO: org.apache.parquet.hadoop.ParquetOutputFormat: Writer version is: PARQUET_1_0
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 133,966
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 129B for [$xmlns, dcterms] BINARY: 1 values, 36B raw, 54B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 141B for [$xmlns, media] BINARY: 1 values, 40B raw, 58B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 144B for [$xmlns, ngc] BINARY: 1 values, 41B raw, 59B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 169B for [$xmlns, pl] BINARY: 1 values, 49B raw, 67B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 187B for [$xmlns, pla] BINARY: 1 values, 55B raw, 73B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 194B for [$xmlns, plfile] BINARY: 1 values, 58B raw, 74B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 182B for [$xmlns, plmedia] BINARY: 1 values, 54B raw, 70B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 190B for [$xmlns, plrelease] BINARY: 1 values, 56B raw, 74B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 537B for [entries, bag, array_element, description] BINARY: 100 values, 109B raw, 130B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 98 entries, 19,996B raw, 98B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 993B for [entries, bag, array_element, id] BINARY: 100 values, 6,716B raw, 839B comp, 1 pages, encodings: [PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 161B for [entries, bag, array_element, mediaavailableDate] INT64: 100 values, 96B raw, 117B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 49 entries, 392B raw, 49B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 147B for [entries, bag, array_element, mediacategories, bag, array_element, medialabel] BINARY: 190 values, 117B raw, 108B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 7 entries, 86B raw, 7B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 257B for [entries, bag, array_element, mediacategories, bag, array_element, medianame] BINARY: 190 values, 178B raw, 159B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 24 entries, 822B raw, 24B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 126B for [entries, bag, array_element, mediacategories, bag, array_element, mediascheme] BINARY: 190 values, 79B raw, 84B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 22B raw, 2B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 207B for [entries, bag, array_element, mediacontent, bag, array_element, plfileduration] DOUBLE: 200 values, 189B raw, 163B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 29 entries, 232B raw, 29B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 62B for [entries, bag, array_element, mediacopyright] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 62B for [entries, bag, array_element, mediacopyrightUrl] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 59B for [entries, bag, array_element, mediacountries, bag, array_element] BINARY: 100 values, 17B raw, 36B comp, 1 pages, encodings: [PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 122B for [entries, bag, array_element, mediacredits, bag, array_element, mediarole] BINARY: 181 values, 80B raw, 89B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 13B raw, 2B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 94B for [entries, bag, array_element, mediacredits, bag, array_element, mediascheme] BINARY: 181 values, 61B raw, 67B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 248B for [entries, bag, array_element, mediacredits, bag, array_element, mediavalue] BINARY: 181 values, 198B raw, 198B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 43 entries, 757B raw, 43B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 67B for [entries, bag, array_element, mediaexcludeCountries] BOOLEAN: 100 values, 29B raw, 39B comp, 1 pages, encodings: [PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 148B for [entries, bag, array_element, mediaexpirationDate] INT64: 100 values, 83B raw, 104B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 21 entries, 168B raw, 21B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 1,806B for [entries, bag, array_element, mediakeywords] BINARY: 100 values, 4,910B raw, 1,677B comp, 1 pages, encodings: [PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 87B for [entries, bag, array_element, mediaratings, bag, array_element, rating] BINARY: 100 values, 32B raw, 51B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 18B raw, 2B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 83B for [entries, bag, array_element, mediaratings, bag, array_element, scheme] BINARY: 100 values, 20B raw, 37B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 14B raw, 1B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 75B for [entries, bag, array_element, mediaratings, bag, array_element, subRatings, bag, array_element] BINARY: 100 values, 29B raw, 46B comp, 1 pages, encodings: [PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 62B for [entries, bag, array_element, mediatext] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 79B for [entries, bag, array_element, mediathumbnails, bag, array_element, plfileduration] DOUBLE: 100 values, 20B raw, 37B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 8B raw, 1B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 74B for [entries, bag, array_element, ngccontentAdType] BINARY: 100 values, 22B raw, 41B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 15B raw, 2B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 163B for [entries, bag, array_element, ngcepisodeNumber] INT64: 100 values, 96B raw, 119B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 30 entries, 240B raw, 30B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 107B for [entries, bag, array_element, ngcnetwork, bag, array_element] BINARY: 100 values, 35B raw, 54B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 35B raw, 2B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 120B for [entries, bag, array_element, ngcseasonNumber] INT64: 100 values, 57B raw, 77B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 6 entries, 48B raw, 6B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 707B for [entries, bag, array_element, ngcuID] BINARY: 100 values, 2,711B raw, 639B comp, 1 pages, encodings: [PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 86B for [entries, bag, array_element, ngcvideoType] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 16B raw, 1B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 177B for [entries, bag, array_element, plmediachapters, bag, array_element, plmediaendTime] DOUBLE: 569 values, 207B raw, 133B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 46 entries, 368B raw, 46B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 797B for [entries, bag, array_element, plmediachapters, bag, array_element, plmediastartTime] DOUBLE: 569 values, 809B raw, 753B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 411 entries, 3,288B raw, 411B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 113B for [entries, bag, array_element, plmediachapters, bag, array_element, plmediathumbnailUrl] BINARY: 569 values, 161B raw, 85B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 4B raw, 1B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 125B for [entries, bag, array_element, plmediachapters, bag, array_element, plmediatitle] BINARY: 569 values, 172B raw, 95B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 2 entries, 10B raw, 2B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 68B for [entries, bag, array_element, plmediaprovider] BINARY: 100 values, 19B raw, 36B comp, 1 pages, encodings: [PLAIN_DICTIONARY, RLE], dic { 1 entries, 7B raw, 1B comp}
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 1,358B for [entries, bag, array_element, title] BINARY: 100 values, 2,271B raw, 1,300B comp, 1 pages, encodings: [PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [entryCount] INT64: 1 values, 14B raw, 29B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [itemsPerPage] INT64: 1 values, 14B raw, 29B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 70B for [startIndex] INT64: 1 values, 14B raw, 29B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]
Oct 26, 2015 7:18:15 PM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 110B for [title] BINARY: 1 values, 29B raw, 47B comp, 1 pages, encodings: [BIT_PACKED, PLAIN, RLE]