Spark Interview Question 5: What is caching in Spark?

Data Cat

Date: March 5th, 2024

Hi everyone! I have started posting content about Spark interview questions for software and data engineers, focused mainly on Spark optimization. I plan to write about ten posts on Spark optimization; by the end of the series, you should be well prepared for Spark-related technical interviews. Although the series is aimed at interview preparation, any Spark user should find it insightful and useful for learning.

“Disclaimer: The views and opinions expressed in this blog post are solely my own and do not reflect those of any entity with which I have been, am now, or will be affiliated. This content was written during a period in which the author was not affiliated with, nor belonged to, any organization that could influence my perspectives. As such, these are my personal insights, shared without any external bias or influence.”

What is Caching?

If you’re working with DataFrames or Datasets and will reuse them across multiple actions, consider caching them in memory after applying your initial transformations. Caching stores the computed data so that Spark does not have to recompute the DataFrame’s lineage (or re-serialize the data) on every subsequent action, which is especially valuable in iterative workloads. Once you have cached a DataFrame, operations you perform on it, such as select, filter, or groupBy, will read from the cached data.
