1. YARN - Yet Another Resource Negotiator. YARN assigns CPU, memory, and storage to applications running on a Hadoop cluster. The first generation of Hadoop could only run MapReduce applications; YARN enables other application frameworks (like Spark) to run on Hadoop as well, which opens up many more possibilities.
2. HDFS - Hadoop Distributed File System (HDFS) is a file system that spans all the nodes in a Hadoop cluster for data storage. It links together the file systems on many local nodes to make them into one big file system.
3. MapReduce - A programming model for large-scale data processing in Hadoop: a map phase transforms input records into key-value pairs, and a reduce phase aggregates the values for each key.
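The MapReduce model can be sketched with the classic word-count example. This is a minimal, single-process Python simulation of the map, shuffle, and reduce phases for illustration only; real Hadoop jobs are typically written in Java against the `org.apache.hadoop.mapreduce` API and run in parallel across the cluster.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the input."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: sum the counts for one word."""
    return (key, sum(values))

documents = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
grouped = shuffle(mapped)
counts = dict(reduce_phase(k, v) for k, v in grouped.items())
print(counts["the"])  # → 3
```

Because each map call sees only one document and each reduce call sees only one key, the framework can scatter these calls across many machines without changing the result.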
4. Hadoop Common - Contains libraries and utilities needed by the other Hadoop modules.