CS1003: Data Processing with MapReduce

Key Competency

  • using MapReduce to process data

Necessary Skills

  • expressing an algorithm in the MapReduce style
  • choosing appropriate classes and methods from the MapReduce API
  • testing and debugging
  • writing clear, tidy, consistent and understandable code

Requirements

The practical involves manipulating fairly large data files using the Hadoop implementation of MapReduce.

When working in the lab, it is highly recommended that you copy a small subset of these files to your local machine under /cs/scratch/username, and use them to develop and test your program. Do not read the input files directly from studres or your home folder, as this will place excessive load on the network.

Your program should perform the following operations:

  • Obtain the absolute paths of the input and output directories from the user. Input must be read from files in the input directory, and output must be written to files in the output directory.
  • Find all character-level or word-level n-grams (depending on the user’s input) for text fragments, written in a given language, contained within files in the given input directory.
  • Print the list of n-grams and their frequencies, in alphabetical order, to a file in the output directory.
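Before wiring anything into Hadoop, it can help to pin down what an n-gram means here. The sketch below is one plain-Java interpretation, assuming a character-level n-gram is a sliding window of n characters and a word-level n-gram is a sliding window of n whitespace-separated words; the class and method names are illustrative, not part of any required API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of n-gram generation, assuming sliding windows of
// n characters (character level) or n whitespace-separated words (word
// level). Names here are invented for illustration.
public class NGramSketch {

    public static List<String> charNGrams(String text, int n) {
        List<String> grams = new ArrayList<>();
        // Slide a window of n characters across the text.
        for (int i = 0; i + n <= text.length(); i++) {
            grams.add(text.substring(i, i + n));
        }
        return grams;
    }

    public static List<String> wordNGrams(String text, int n) {
        String[] words = text.trim().split("\\s+");
        List<String> grams = new ArrayList<>();
        // Slide a window of n words, re-joined with single spaces.
        for (int i = 0; i + n <= words.length; i++) {
            grams.add(String.join(" ", Arrays.copyOfRange(words, i, i + n)));
        }
        return grams;
    }

    public static void main(String[] args) {
        System.out.println(charNGrams("hello", 2));      // [he, el, ll, lo]
        System.out.println(wordNGrams("the cat sat", 2)); // [the cat, cat sat]
    }
}
```

The same splitting logic can later be called from inside a mapper, which keeps the n-gram code testable on its own.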

Your program should deal gracefully with possible errors, such as the input files being unavailable or containing data in an unexpected format. The source code for your program should follow common style guidelines, including:

  1. formatting code neatly
  2. consistency in name formats for methods, fields, variables
  3. avoiding embedded “magic numbers” and string literals
  4. minimising code duplication
  5. avoiding long methods
  6. using comments and informative method/variable names to make the code clear to the reader

Deliverables

Hand in, via MMS, a zip file containing the following:

  • Your Java source files

  • A brief report (maximum 3 pages) explaining the decisions you made, how you tested your program, and how you solved any difficulties that you encountered. Include instructions on how to run your program and any dependencies that need resolving. You can use any software you like to write your report, but your submitted version must be in PDF format.
  • Also within your report:
    • Highlight one piece of feedback from your previous submissions, and explain how you used it to improve this submission.
    • If you had to carry out a similar large-scale data processing task in future, would you choose Hadoop or the basic file manipulation you used in earlier practicals? Write a brief comparison of the two approaches, explaining the advantages and disadvantages of each, and justify your choice.

Extensions

If you wish to experiment further, you could try any or all of the following:

  1. Give the user an option to order the n-grams by occurrence frequency. Hint: You could use the ‘Most Popular Word’ example on studres as a starting point.
  2. Perform additional queries on the data, such as:
    a. Count all text fragments, in a given language, that contain a string given by the user
    b. Find which word occurs in the highest number of different languages
    c. Any other statistics/queries about the data that you could generate using Hadoop
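For extension 1, the ordering itself is easy to prototype outside Hadoop before adapting the ‘Most Popular Word’ example. This sketch (all names invented for illustration) sorts n-gram counts by descending frequency, breaking ties alphabetically:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Illustrative (non-Hadoop) sketch of extension 1: order n-grams by
// descending occurrence frequency, breaking ties alphabetically.
public class FrequencyOrderSketch {

    public static List<Map.Entry<String, Integer>> byFrequency(Map<String, Integer> counts) {
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(counts.entrySet());
        entries.sort(Comparator
                .comparing((Map.Entry<String, Integer> e) -> e.getValue()).reversed()
                .thenComparing(Map.Entry::getKey));
        return entries;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = Map.of("ab", 3, "ba", 1, "aa", 3);
        System.out.println(byFrequency(counts)); // [aa=3, ab=3, ba=1]
    }
}
```

Within Hadoop itself, the equivalent effect is achieved with a custom key comparator (see the hint on setOutputKeyComparatorClass below) rather than sorting in memory.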

Hints

Here is one possible sequence for developing your solution. It is recommended that you make a new method or class at each stage, so that you can easily go back to a previous stage if you run into problems. Please use demonstrator support in the labs whenever possible.

  1. You will need to examine the structure of the data files, to see how the text fragments and language specifications are represented.
  2. Tackle one problem at a time, beginning with selecting all text in a particular language, which requires a mapper class but no reducer. Initially, you can use a fixed search language String in your mapper class for testing purposes.
  3. To select only text in the required language, the difficulty is that the language is recorded in a different line from the text fragment, so will be processed in a different call to map. This can be solved using a field in the mapper object to record the most recently encountered language. The map method can then either update this field, if the current line contains a language, or check the language field value, if the current line contains a text fragment.
  4. To test, first make a new directory and copy 10 or 20 of the data files into it—the full data set will take inconveniently long to run.
  5. Once this works, refine your solution so that the search language is passed as a parameter. Recall that you can pass a parameter value to a mapper or reducer by setting it on the JobConf (for example with JobConf.set) and reading it back in the mapper’s configure method.
  6. To return text in the specified language as n-grams, you will also need to pass the user’s specified n-gram type and size as parameters, using the same mechanism as in Step 5. Following this, it is recommended that you reuse your n-gram creation code from Practical 2 to split each String into its corresponding n-grams. Remember that, unlike Practical 2, you do not need to represent word boundaries with an underscore.
  7. In order to output the n-gram frequencies alongside the n-grams themselves, you will need to implement a reducer class that groups duplicate n-grams and sums their total frequency. For a reminder of how to do this, review the ‘Word Count’ example on studres.
  8. For ordering your output n-grams, recall that sorting order is specified with the setOutputKeyComparatorClass method of the JobConf class.
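The trickiest steps above are hint 3 (remembering the language across calls to map) and hint 7 (grouping and counting). Their logic can be simulated in plain Java, with no Hadoop dependencies, before moving it into real Mapper/Reducer classes. Note the input line format here (“lang:…” then “text:…”) is invented for illustration; inspect the real data files for the actual representation, as hint 1 says.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Plain-Java simulation of the stateful-mapper idea in hint 3 and the
// counting reducer in hint 7. No Hadoop dependencies; the line format
// ("lang:xx" then "text:...") is an assumption for illustration only.
public class PipelineSketch {

    private String currentLanguage = null; // plays the role of the mapper field

    // Stands in for map(): records the most recently seen language, and
    // emits a fragment only when that language matches the search language.
    public List<String> map(String line, String searchLanguage) {
        if (line.startsWith("lang:")) {
            currentLanguage = line.substring(5).trim();
            return List.of();
        }
        if (line.startsWith("text:") && searchLanguage.equals(currentLanguage)) {
            return List.of(line.substring(5).trim());
        }
        return List.of();
    }

    // Stands in for the reducer: groups duplicate n-grams and sums their
    // counts; a TreeMap keeps the keys in alphabetical order.
    public static SortedMap<String, Integer> reduce(List<String> ngrams) {
        SortedMap<String, Integer> counts = new TreeMap<>();
        for (String g : ngrams) {
            counts.merge(g, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        PipelineSketch mapper = new PipelineSketch();
        List<String> lines = List.of("lang:en", "text:aba", "lang:fr", "text:bonjour");
        List<String> ngrams = new ArrayList<>();
        for (String line : lines) {
            for (String fragment : mapper.map(line, "en")) {
                // character 2-grams of each selected fragment
                for (int i = 0; i + 2 <= fragment.length(); i++) {
                    ngrams.add(fragment.substring(i, i + 2));
                }
            }
        }
        System.out.println(reduce(ngrams)); // {ab=1, ba=1}
    }
}
```

In the real job, the field-based state trick works because Hadoop reuses one mapper instance for all lines of a split; the reduce step is then just the standard ‘Word Count’ pattern applied to n-grams.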