CS61C Summer 2012 Lab 12: MapReduce

Setup

You may work on this lab with a partner. It will be especially helpful if one partner has some experience coding in Java.

The MapReduce programming framework is designed primarily for large distributed clusters. However, large distributed jobs are harder to debug. For this lab, we’ll be using Hadoop -- an open-source implementation of MapReduce -- in “local mode”, where your Map and Reduce routines run entirely within one process.

Exercises

Exercise 0: Running Word Count

Copy the template files for this lab from ~cs61c/labs/12.

The resulting directory will contain two files:
  1. WordCount.java — source code for Hadoop WordCount
  2. Makefile

Run 'make' to compile the code and package it into 'wc.jar'. Then run the word count example:

$ hadoop jar wc.jar WordCount ~cs61c/data/billOfRights.txt.seq wc-out

This will run word count over a sample input file (the US Bill of Rights). Your output should be visible in wc-out/part-r-00000. If you had multiple reducers, the output would be split across files part-r-00000, part-r-00001, and so on, with each reducer writing to the file carrying its ID number. The plain text of this test input is in ~cs61c/data/billOfRights.txt. For the input to your MapReduce job, each map() call’s key is a document identifier, and the document’s actual text is in the value.
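For reference, the heart of the stock Hadoop WordCount example looks roughly like the mapper below. (Your template’s names may differ slightly; this is the standard example, not necessarily a verbatim copy of your file.)

// A static nested class inside WordCount; the file also imports
// java.io.IOException, java.util.StringTokenizer,
// org.apache.hadoop.io.IntWritable, org.apache.hadoop.io.Text,
// and org.apache.hadoop.mapreduce.Mapper.
public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  // key: the document identifier; value: the document's full text.
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);   // emit (word, 1) for every occurrence
    }
  }
}

The matching reducer simply sums the 1s emitted for each word.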

Once you have things working on the small test case, try your code on the larger input file ~cs61c/data/sample.seq (approximately 34 MiB), which contains one week's worth of newsgroup posts extracted from this corpus. (Don't worry about duplicate posts.) Since Hadoop requires the output directory not to exist when a MapReduce job is executed, you'll need to delete the wc-out directory first (rm -rf wc-out) or choose a different output directory.

You may notice that the Reduce percentage-complete counter moves in strange ways. There’s a reason: Hadoop counts the distributed shuffle as the first third of Reduce progress and the sort as the second third; your actual reduce() code is only the last third. In local mode, the sort is quick and there is no shuffle at all, so don’t be surprised if progress jumps straight to 66% and then slows.

Exercise 1: Documents-using-a-Word Count

Copy WordCount.java to DocWordCount.java and rename the class (in the file) from 'WordCount' to 'DocWordCount'. Modify it to count the number of documents containing each word, rather than the number of times each word occurs in the input. Run make to compile your modified version, then run it on the same inputs as before.

You should only need to modify the code inside the map() function for this part. Each call to map() receives a single document, and each document is passed to exactly one call to map().
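One minimal sketch of the change (assuming your mapper is shaped like the stock TokenizerMapper above): keep a per-document HashSet so each word is emitted at most once per map() call.

// Replacement map() body for DocWordCount; needs java.util.HashSet and
// java.util.Set in addition to WordCount's existing imports.
public void map(Object key, Text value, Context context)
    throws IOException, InterruptedException {
  Set<String> seen = new HashSet<String>();
  StringTokenizer itr = new StringTokenizer(value.toString());
  while (itr.hasMoreTokens()) {
    String token = itr.nextToken();
    if (seen.add(token)) {   // add() returns true only on the first sighting
      context.write(new Text(token), new IntWritable(1));
    }
  }
}

With the mapper deduplicating within each document, the unchanged summing reducer now counts documents rather than occurrences.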

Exercise 2: Full Text Index Creation

Create a new file called 'Index.java' based on WordCount.java. Modify it to output, for each word, a list of the locations where the word occurs (the word's number within the document, together with an identifier for the document). (An identifier for each document is provided as the key to the mapper.) Your output should have lines that look like:

word1	document-id:index,index,...
word1	document-id:index,index,...
word2	document-id:index,index,...
.
.
.

Minor line formatting details don’t matter. You should number the words in a document starting from zero. You can assume that there’s just one reducer and hence just one output file. For each word and document pair, there should be a single list of the indices at which the word appears in that document.

For this exercise, you may need to modify map(), reduce(), and the type signature of the Reducer. You will also need to make a minor edit to main() to tell the framework about the Reducer's new type signature.
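Here is a minimal sketch of one possible structure. All names are illustrative, and it assumes the document identifier arrives as a Text key and that locations are encoded as "docID:index" strings; your design may reasonably differ.

// Mapper: number the words of each document from zero and emit
// (word, "docID:wordNumber"). Note the output value type is now Text.
public static class IndexMapper extends Mapper<Text, Text, Text, Text> {
  public void map(Text docId, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    long wordNum = 0;
    while (itr.hasMoreTokens()) {
      context.write(new Text(itr.nextToken()),
                    new Text(docId.toString() + ":" + wordNum));
      wordNum++;
    }
  }
}

// Reducer: collect every location for a word, group the indices by
// document, and emit one "docID:i1,i2,..." value per document.
// Uses java.util.Map, java.util.HashMap, and java.util.ArrayList.
public static class IndexReducer extends Reducer<Text, Text, Text, Text> {
  public void reduce(Text word, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    HashMap<String, ArrayList<String>> perDoc =
        new HashMap<String, ArrayList<String>>();
    for (Text v : values) {
      String[] parts = v.toString().split(":");   // [docID, index]
      if (!perDoc.containsKey(parts[0])) {
        perDoc.put(parts[0], new ArrayList<String>());
      }
      perDoc.get(parts[0]).add(parts[1]);
    }
    for (Map.Entry<String, ArrayList<String>> e : perDoc.entrySet()) {
      StringBuilder line = new StringBuilder(e.getKey()).append(":");
      for (int i = 0; i < e.getValue().size(); i++) {
        if (i > 0) line.append(",");
        line.append(e.getValue().get(i));
      }
      context.write(word, new Text(line.toString()));
    }
  }
}

The "minor edit to main()" is telling the job about the new value types, e.g. job.setMapOutputValueClass(Text.class) and job.setOutputValueClass(Text.class).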

Compile and run this MapReduce program on the same inputs as for Exercise 1.

Check off

  1. Show your TA your DocWordCount.java and Index.java.
  2. Show your TA the first page of your Index.java's output file when run on sample.seq.

We recommend deleting your output directories when you have completed the lab, so you don't run out of your 500 MB disk quota.

Further resources:

The Java API documentation is on the web: http://download.oracle.com/javase/6/docs/api/

The classes java.util.HashMap, java.util.HashSet and java.util.ArrayList are particularly likely to be useful to you.

The Hadoop Javadoc is also available: http://hadoop.apache.org/common/docs/r0.20.2/api/index.html

You mostly shouldn’t need it, but the documentation for org.apache.hadoop.io.Text may be handy.