You would use Apache Spark over Hadoop MapReduce when you need faster data processing and you're working with iterative algorithms or real-time data streams. For example, if you're building a recommendation engine that requires multiple passes over the same dataset, Spark's in-memory processing can significantly reduce runtime compared to Hadoop MapReduce, which writes intermediate results to disk between each map and reduce stage and re-reads them on the next pass.
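To make the iterative case concrete, here is a minimal sketch in PySpark (assuming Python is your Spark language of choice); the HDFS path, column layout, and update rule are hypothetical, and the point is simply that the dataset is cached once and then reused across iterations from memory:

```python
# Minimal sketch of iterative processing in Spark (PySpark assumed).
# The input path and the toy update rule are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iterative-example").getOrCreate()

# Load the ratings once and cache them in memory, so every subsequent
# iteration reads from RAM instead of going back to disk -- the key
# contrast with MapReduce, which persists intermediate output to disk.
ratings = (
    spark.sparkContext.textFile("hdfs:///data/ratings.csv")  # hypothetical path
    .map(lambda line: float(line.split(",")[2]))              # assume rating in column 3
    .cache()
)

estimate = 0.0
for _ in range(10):  # e.g. ten passes over the same cached dataset
    # Toy update rule: pull the estimate toward the mean rating.
    mean_rating = ratings.mean()
    estimate += 0.5 * (mean_rating - estimate)

print("Converged estimate:", estimate)
spark.stop()
```

In a MapReduce implementation of the same loop, each of those ten passes would be a separate job that rereads the input from HDFS, which is where the performance gap comes from.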
Hadoop MapReduce is often more suitable when you have large-scale batch jobs that do not require low-latency processing or real-time analysis. For instance, if you have a very large dataset stored in HDFS and need to run a complex batch job that processes it over hours or days, MapReduce can be a good fit because of its robust fault tolerance and lower memory requirements compared to Spark.
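As a rough illustration of that batch model, below is a sketch of a word-count job written for Hadoop Streaming in Python; the file name, HDFS paths, and streaming jar location are assumptions and vary by installation:

```python
#!/usr/bin/env python3
# Sketch of a Hadoop Streaming word-count job (paths and jar are assumptions).
# Invoked roughly as:
#   hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
#     -input /data/corpus -output /data/wordcounts \
#     -mapper "wordcount.py map" -reducer "wordcount.py reduce" -file wordcount.py
import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin; Hadoop shuffles and
    # sorts these pairs by key before the reduce phase.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives grouped by key (already sorted), so a single
    # running total per word is enough.
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            count += int(value)
        else:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")

if __name__ == "__main__":
    # Select the phase via a command-line argument ("map" or "reduce").
    if len(sys.argv) > 1 and sys.argv[1] == "reduce":
        reducer()
    else:
        mapper()
```

Because every map and reduce task writes its output to disk, a failed task can simply be rerun on another node, which is what gives MapReduce its resilience on very long-running batch jobs.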