Hundreds of gigabytes of data constitute the low end of Hadoop-scale. In fact, Hadoop is built to process “web-scale” data on the order of hundreds of gigabytes to terabytes or even petabytes. At this scale, the input data set will likely not fit on a single computer’s hard drive, much less in memory. Hadoop therefore includes a distributed file system (HDFS) that breaks the input data into blocks and distributes those blocks across the machines in your cluster for storage.
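To make this concrete, here is a minimal sketch using the Hadoop FileSystem API. It assumes a reachable HDFS cluster (configured via `core-site.xml` on the classpath) and uses hypothetical file paths; it copies a local file into HDFS, then asks where each block of that file ended up. The block splitting and placement happen transparently on the write.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsDemo {
  public static void main(String[] args) throws Exception {
    // Picks up fs.defaultFS from the cluster configuration on the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical paths -- substitute a real local file and HDFS destination.
    Path local = new Path("/tmp/weblogs.txt");
    Path remote = new Path("/data/weblogs.txt");

    // HDFS transparently splits the file into blocks (128 MB by default in
    // recent releases) and spreads replicas of each block across DataNodes.
    fs.copyFromLocalFile(local, remote);

    // Ask the NameNode which machines hold each block of the file.
    FileStatus status = fs.getFileStatus(remote);
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.printf("offset %d, length %d, hosts %s%n",
          block.getOffset(), block.getLength(),
          String.join(", ", block.getHosts()));
    }
  }
}
```

For a file much larger than one block, the output lists several blocks, each stored on a different set of machines, which is exactly the property that lets later processing run on many nodes in parallel.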