5 Data Formats You Can’t Avoid As A Data Engineer By Isaac Omolayo | December, 2022 | Jobs Vox


Photo by Marvin Meyer on Unsplash

As simple as it sounds, one of the things that will make you more productive as a data engineer is using the right data file format for your projects. Choosing the right file format means your data engineering work is off to a good start that you can keep building on.

As you grow and your business use cases continue to evolve, you will reach a point in your engineering work where you will want to optimize your ETL processes, your streaming applications, and your data workloads in general. This optimization can target storage cost, data compatibility with other platforms, security, ease of use, and other needs.

You’ll always want to have a file format that suits your business use cases and how the solution evolves. There are different questions you should ask when planning your data engineering project. One of the most important is what kind of data format you should use, which depends on the nature of the project. In this post, I’ll show you the five hottest data formats you need to master and know when to use them.

image by author


JSON

The JSON data format is the most widely used format in software development and in the data engineering world. You will use JSON to send information across various systems — servers, databases, applications, and software — to send configuration information, and you may also want to provide data for the analytics team to analyze in JSON format. In any case, some features of the JSON data format that will be helpful for your projects are mentioned below:

Sample JSON Data Format

"company": "Cheers",
"name": "Bob",
"price": 200
"company": "Brother Tyrells",
"name": "Frank",
"price": 400
"company": "Orthom Orange",
"name": "Allie",
"price": 300
  • JSON is lightweight and fast to parse, and its compact representation consumes less memory than more verbose formats, which makes it well suited for large object graphs and high-throughput systems.
  • JSON is a lightweight data format that allows you to send data from one system to another for server and application configuration, REST API authentication, and data analytics workloads.
  • Web applications often use JSON as a standard format for sending and receiving data. In addition, it serializes well for storage, making it suitable for batch processing, analytical workloads, and real-time data streaming.
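To make the points above concrete, here is a minimal sketch of parsing the sample price records with Python's standard-library json module:

```python
import json

# The sample price records from above, as a JSON array
raw = """
[
    {"company": "Cheers", "name": "Bob", "price": 200},
    {"company": "Brother Tyrells", "name": "Frank", "price": 400},
    {"company": "Orthom Orange", "name": "Allie", "price": 300}
]
"""

records = json.loads(raw)                    # parse JSON text into Python objects
total = sum(r["price"] for r in records)     # aggregate across records
print(total)                                 # 900
```

The same list of dictionaries can be serialized back with json.dumps(records), which is what makes JSON convenient for moving data between systems.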


XML

One of the data file formats that helps bridge the gap between software development, data science, and data engineering is XML. With XML, you can address business use cases where the source provides data in CSV or another format and you distribute the transformed data in XML, so that downstream web applications and software can consume it. Many legacy systems that have been running for decades still use XML for data sharing and management, and your work may involve moving data from such a legacy system to a modern database for better performance. Some features of the XML data format include the following:

  • XML is at the core of many web applications; it is easy to read XML data and display it in mobile and web user interfaces.
  • The XML data format is software- and hardware-independent; it is easy to share data across systems with different hardware and software configurations, and a program written in any language can read and process XML documents.
  • XML is flexible and extensible; it allows you to store data regardless of how it will be presented.

Sample XML data format

<companies>
  <company>Brother Tyrells</company>
  <company>Orthom Orange</company>
</companies>
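As a quick illustration of how easily any language can process XML, here is a sketch that parses this fragment with Python's standard-library xml.etree.ElementTree module; the <companies> root element is an assumption added to make the fragment well-formed:

```python
import xml.etree.ElementTree as ET

# The company fragment from above, wrapped in an assumed <companies> root
xml_text = """
<companies>
  <company>Brother Tyrells</company>
  <company>Orthom Orange</company>
</companies>
"""

root = ET.fromstring(xml_text)                        # parse text into an element tree
names = [el.text for el in root.findall("company")]   # collect each <company> value
print(names)  # ['Brother Tyrells', 'Orthom Orange']
```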


CSV

A CSV is a comma-separated values file that stores data in a tabular, spreadsheet-like format. CSV is a standard format used by many applications and throughout data engineering: you can quickly move data between platforms and systems for different purposes, and many applications are compatible with it. Some nice features of CSV are:

  • CSV is easy to work with, it’s human readable, and you don’t need a dedicated system to read and write CSV files.
  • The CSV format has a simple, flat structure, which makes it easy to read and write data across many compatible systems and databases.
  • The CSV format is compact compared to formats such as XML and JSON; without tags and nested structures, it is well suited for analytical workloads.
price_info.csv (image by author)
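A minimal sketch of reading this kind of price data with Python's standard-library csv module (the in-memory text stands in for a price_info.csv file on disk):

```python
import csv
import io

# In-memory CSV mirroring the price_info.csv example
csv_text = """company,name,price
Cheers,Bob,200
Brother Tyrells,Frank,400
Orthom Orange,Allie,300
"""

# DictReader maps each row to a dict keyed by the header line
reader = csv.DictReader(io.StringIO(csv_text))
rows = list(reader)

print(rows[0]["company"])                   # Cheers
print(sum(int(r["price"]) for r in rows))   # 900
```

Note that CSV carries no type information: every value arrives as a string, which is why the price column must be cast with int() before summing.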

Parquet

Parquet is an open-source, column-oriented data format for efficient storage and retrieval. Its design provides efficient data compression and encoding schemes with enhanced performance for handling complex data at scale. Parquet is typically used for analytics (OLAP) use cases, often in conjunction with traditional OLTP databases. Some important features of Parquet are:

  • Faster queries that can retrieve specific column values without reading the full row data
  • Highly efficient column-wise compression, which lowers the overall cost of storage and improves performance
  • High compatibility with OLAP, which saves storage space and helps speed up analytics queries
Price Dataframe (image by author)

Below is the Spark code to save a Spark DataFrame in the Parquet data format (the truncated write call is completed here to target the directory listed below):

price_information.write \
    .parquet("/FileStore/tables/price/parquet/")

The Databricks command to list the Parquet files in the directory /FileStore/tables/price/parquet/ is given below.

%fs ls "/FileStore/tables/price/parquet/"
Result of Parquet files listed in directory (image by author)


Delta Table

A Delta table is built on Delta Lake, an open-format storage layer that provides reliability, performance, and security across your data lake, and it performs well for both streaming and batch operations. With Delta Lake, you can design a single home for structured, semi-structured, and unstructured data while maintaining performance and consistency across all downstream systems. Delta tables are well suited for cost-effective, highly efficient, and scalable lakehouse designs.

Some important features of Delta tables are:

  • Delta tables ensure high-quality, reliable data storage
  • Delta table is open and allows secure data sharing
  • One essential improvement that Delta tables bring into the mix is lightning-fast performance: fast reads and writes
  • Delta tables support unified batch and large-scale streaming workloads while protecting data integrity
Price Dataframe (image by author)

Below is the Spark code to save a Spark DataFrame to a Delta table (the truncated write call is completed here to target the directory listed below):

price_information.write \
    .format("delta") \
    .save("/FileStore/tables/price/delta/")

The Databricks command to list the Delta data format files in the directory /FileStore/tables/price/delta/ is given below.

%fs ls "/FileStore/tables/price/delta/"
Result of delta files listed in directory (image by author)


In the end, what matters is building an efficient, scalable, stable system: using the right data file format is just the start of the game. A single data format may serve your data engineering project, but there will be situations where you need more than one format to meet business needs. Formats such as the Delta table are still evolving, with new features being added that can cover more of your business needs. Try things out and confirm which data format works better for your project, and you will continue to enjoy the benefits these file formats can offer your business now and in the future.

Further reading

[1] Delta Lake – Getting Started (2022)

Thank you for reading.

