Over this six-plus-one-part blog series we covered many different technologies and integrations as we built our data pipeline. While there are standard patterns we can often follow, sometimes we need to be flexible and use what is available.
In this part we set up our big data platform, Azure Data Explorer, and created ingestion events to continually ingest our transformed data. Finally, we added context to our data by creating a data model and exporting it to our data lake.
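As a rough illustration, here is a minimal sketch of queued ingestion into Azure Data Explorer using the Python `azure-kusto-ingest` SDK. The cluster URI, database, and table names are placeholders rather than the values used in the series, and a mapping reference would typically be added for JSON-formatted data.

```python
# Minimal sketch: queue a local file for ingestion into Azure Data Explorer.
# Cluster URI, database, and table names below are hypothetical placeholders.
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.data.data_format import DataFormat
from azure.kusto.ingest import IngestionProperties, QueuedIngestClient

# Ingestion goes through the cluster's ingest endpoint (note the "ingest-" prefix).
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://ingest-mycluster.westeurope.kusto.windows.net"
)
client = QueuedIngestClient(kcsb)

props = IngestionProperties(
    database="pipeline-db",       # hypothetical database name
    table="TransformedEvents",    # hypothetical target table
    data_format=DataFormat.CSV,   # match the format of the transformed files
)

# The file is queued; Azure Data Explorer picks it up asynchronously.
client.ingest_from_file("transformed_batch.csv", ingestion_properties=props)
```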
We discussed the importance of proper data management upfront and the need to always stay in control of data sharing. We sent the appropriate data automatically to our vendor via an Azure Function App.
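A minimal sketch of that idea, assuming a blob-triggered Python Azure Function and a hypothetical vendor endpoint: only an explicitly whitelisted set of fields is forwarded, which is one way to stay in control of what gets shared. The binding name and field names are assumptions, and the blob trigger itself would be configured in the function's bindings.

```python
# Minimal sketch: forward newly landed blobs to a vendor endpoint,
# sending only fields that are explicitly approved for sharing.
import json
import logging
import os

import azure.functions as func
import requests

VENDOR_URL = os.environ.get("VENDOR_API_URL", "https://example-vendor.invalid/upload")
SHARED_FIELDS = {"device_id", "timestamp", "temperature"}  # hypothetical whitelist


def main(inblob: func.InputStream) -> None:
    records = json.loads(inblob.read())
    # Drop everything that is not explicitly approved for sharing.
    payload = [{k: v for k, v in r.items() if k in SHARED_FIELDS} for r in records]
    resp = requests.post(VENDOR_URL, json=payload, timeout=30)
    resp.raise_for_status()
    logging.info("Forwarded %d records from %s to the vendor", len(payload), inblob.name)
```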
We added a new uploader method to store data in Azure Blob storage and started to deal with leaky data pipelines by handling failed uploads.
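For reference, a sketch of what such an uploader can look like with the `azure-storage-blob` SDK; the container name and environment variable are assumptions, and failed uploads are parked locally so a later retry job can pick them up instead of losing data.

```python
# Minimal sketch: upload a file to Azure Blob Storage and keep failed
# uploads in a local retry directory so the pipeline does not leak data.
import logging
import os
import shutil

from azure.core.exceptions import AzureError
from azure.storage.blob import BlobServiceClient

RETRY_DIR = "failed_uploads"


def upload_file(path: str, container: str = "raw-data") -> bool:
    service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
    blob = service.get_blob_client(container=container, blob=os.path.basename(path))
    try:
        with open(path, "rb") as fh:
            blob.upload_blob(fh, overwrite=True)
        return True
    except AzureError as exc:
        # Park the file locally so it can be retried later.
        logging.error("Upload of %s failed: %s", path, exc)
        os.makedirs(RETRY_DIR, exist_ok=True)
        shutil.move(path, os.path.join(RETRY_DIR, os.path.basename(path)))
        return False
```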
We refactored our web server code and created a utils module, which handles file saving and ETL methods as background tasks.
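A minimal sketch of that pattern, assuming the web server is FastAPI (an assumption, not something the series confirms): the endpoint hands file saving and a placeholder ETL step to background tasks so the request can return immediately.

```python
# Minimal sketch: a utils-style module wired into a FastAPI endpoint,
# with file saving and the ETL step running as background tasks.
import json
from pathlib import Path

from fastapi import BackgroundTasks, FastAPI, Request

app = FastAPI()
DATA_DIR = Path("incoming")


def save_file(payload: dict, name: str) -> None:
    """Persist the raw payload to disk."""
    DATA_DIR.mkdir(exist_ok=True)
    (DATA_DIR / name).write_text(json.dumps(payload))


def run_etl(name: str) -> None:
    """Placeholder for the transform step that feeds the rest of the pipeline."""
    ...


@app.post("/ingest")
async def ingest(request: Request, background_tasks: BackgroundTasks):
    payload = await request.json()
    name = "batch.json"  # hypothetical naming scheme
    background_tasks.add_task(save_file, payload, name)
    background_tasks.add_task(run_etl, name)
    return {"status": "accepted"}
```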