The Melancholy Daily Life of a Network Administrator

A blog about the melancholy daily life of an administrator who handles network administration at a certain organization.

Hadoop-0.21.0 Installation (3)

So, as before, I ran the Sort Benchmark.

hadoop@sv1> ./bin/hadoop jar hadoop-mapred-examples-0.21.0.jar randomwriter rand
hadoop@sv1> ./bin/hadoop jar hadoop-mapred-examples-0.21.0.jar sort rand rand-sort
10/09/12 19:55:39 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
Running 30 maps.
Job started: Sun Sep 12 19:55:39 JST 2010
10/09/12 19:55:40 INFO mapreduce.JobSubmitter: number of splits:30
10/09/12 19:55:40 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
10/09/12 19:55:40 INFO mapreduce.Job: Running job: job_201009121952_0001
10/09/12 19:55:41 INFO mapreduce.Job: map 0% reduce 0%
10/09/12 19:58:07 INFO mapreduce.Job: map 3% reduce 0%
10/09/12 19:58:25 INFO mapreduce.Job: map 6% reduce 0%
10/09/12 19:58:43 INFO mapreduce.Job: map 13% reduce 0%
10/09/12 19:58:52 INFO mapreduce.Job: map 16% reduce 0%
10/09/12 19:58:55 INFO mapreduce.Job: map 20% reduce 0%
10/09/12 20:00:06 INFO mapreduce.Job: map 23% reduce 0%
10/09/12 20:00:57 INFO mapreduce.Job: map 30% reduce 0%
10/09/12 20:01:06 INFO mapreduce.Job: map 33% reduce 0%
10/09/12 20:01:50 INFO mapreduce.Job: map 36% reduce 0%
10/09/12 20:02:11 INFO mapreduce.Job: map 40% reduce 0%
10/09/12 20:02:18 INFO mapreduce.Job: map 43% reduce 0%
10/09/12 20:03:24 INFO mapreduce.Job: map 46% reduce 0%
10/09/12 20:03:27 INFO mapreduce.Job: map 50% reduce 0%
10/09/12 20:03:42 INFO mapreduce.Job: map 53% reduce 0%
10/09/12 20:04:18 INFO mapreduce.Job: map 56% reduce 0%
10/09/12 20:04:41 INFO mapreduce.Job: map 60% reduce 0%
10/09/12 20:04:47 INFO mapreduce.Job: map 63% reduce 0%
10/09/12 20:05:48 INFO mapreduce.Job: map 66% reduce 0%
10/09/12 20:05:57 INFO mapreduce.Job: map 70% reduce 0%
10/09/12 20:06:06 INFO mapreduce.Job: map 73% reduce 0%
10/09/12 20:06:36 INFO mapreduce.Job: map 76% reduce 0%
10/09/12 20:07:20 INFO mapreduce.Job: map 80% reduce 0%
10/09/12 20:07:29 INFO mapreduce.Job: map 83% reduce 0%
10/09/12 20:08:33 INFO mapreduce.Job: map 86% reduce 0%
10/09/12 20:08:45 INFO mapreduce.Job: map 93% reduce 0%
10/09/12 20:09:00 INFO mapreduce.Job: map 96% reduce 0%
10/09/12 20:09:12 INFO mapreduce.Job: map 100% reduce 0%
10/09/12 20:09:14 INFO mapreduce.Job: Job complete: job_201009121952_0001
10/09/12 20:09:14 INFO mapreduce.Job: Counters: 16
FileSystemCounters
HDFS_BYTES_READ=2510
HDFS_BYTES_WRITTEN=32318580900
org.apache.hadoop.examples.RandomWriter$Counters
BYTES_WRITTEN=32212453688
RECORDS_WRITTEN=3067212
Job Counters
Total time spent by all maps waiting after reserving slots (ms)=0
Total time spent by all reduces waiting after reserving slots (ms)=0
SLOTS_MILLIS_MAPS=4597209
SLOTS_MILLIS_REDUCES=0
Launched map tasks=33
Map-Reduce Framework
Failed Shuffles=0
GC time elapsed (ms)=10522
Map input records=30
Map output records=3067212
Merged Map outputs=0
Spilled Records=0
SPLIT_RAW_BYTES=2510
Job ended: Sun Sep 12 20:09:14 JST 2010
The job took 814 seconds.
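For a rough sense of scale, the counters above can be turned into an aggregate write throughput figure. A minimal sketch (the input values are taken straight from the log above; the MB/s figure is just arithmetic, not something the job reports):

```python
# Values taken from the job counters and timing in the log above.
bytes_written = 32_212_453_688   # RandomWriter BYTES_WRITTEN
job_seconds = 814                # "The job took 814 seconds."

# Aggregate write throughput across the cluster, in decimal MB/s.
throughput_mb_s = bytes_written / job_seconds / 1_000_000
print(f"{throughput_mb_s:.1f} MB/s")  # roughly 39.6 MB/s
```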

And with that, it finished in 814 seconds (13 minutes 34 seconds).

Last time, with two DataNodes, it took 1,298 seconds (21 minutes 38 seconds), so this works out to roughly a 1.6x performance improvement.
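The 1.6x figure follows directly from the two runtimes; as a quick check:

```python
# Runtimes from the two benchmark runs in this series.
previous_seconds = 1298  # two-DataNode run
current_seconds = 814    # this run

speedup = previous_seconds / current_seconds
print(f"speedup: {speedup:.2f}x")  # about 1.59x
```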

Strictly speaking, I can't tell whether adding DataNodes translates directly into linear performance gains, but at least it's clear that increasing the node count is effective.
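If one wanted to check how close a run comes to linear scaling, a simple model is that runtime shrinks in proportion to node count. A sketch (the node counts here are parameters, not values from this post, which only states that the earlier run used two DataNodes):

```python
def predicted_linear_runtime(base_seconds: float, base_nodes: int, new_nodes: int) -> float:
    """Runtime predicted by ideal linear scaling from a baseline measurement."""
    return base_seconds * base_nodes / new_nodes

# Starting from the 2-node, 1298-second baseline, perfectly linear
# scaling to 4 nodes (a hypothetical count) would predict:
print(predicted_linear_runtime(1298, 2, 4))  # 649.0 seconds
```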

I wonder if I can scrounge up some hardware from somewhere... (wry smile).
