Hi everyone!
Given my strong interest in back-end development with Python, I wanted to combine my seven years of work experience in the IIoT world with my passion for programming, building a piece of software that would be both useful and let me put what I have studied over the recent months into practice.
The guiding idea behind the project was the phenomenon of "digital transformation": I chose modular, microservice-oriented technologies rather than better-known and more elaborate frameworks, and still managed to achieve what I had planned.
Following this principle, with little more than 600 lines of code spread across the various scripts (some of which contain only data structures rather than algorithms), I built a piece of software for sampling, historicizing, querying and analyzing a finite number of analog measurements in a production environment.
The project is mainly composed of two "modules", each of which has a specific role:
the collector, which logs the process data read from a field device (a PLC) to a local SQLite database;
the API server, which handles client requests and returns the appropriate responses.
The main technologies used for the various modules of the project are:
Module 1 (collector):
python-snap7 for communication between the PLC device and Python;
schedule for scheduling historicization tasks.
Module 2 (API server):
FastAPI for creating REST APIs;
matplotlib for creating measurement plots;
uvicorn as the ASGI server.
In both modules, SQLAlchemy is used as the ORM.
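To give a concrete idea of what this shared persistence layer might look like, here is a minimal SQLAlchemy sketch; the table and column names are my own illustration, not the project's actual schema:

```python
# Minimal sketch of a possible SQLAlchemy mapping shared by the two modules.
# Table and column names are illustrative, not the project's actual schema.
from sqlalchemy import Column, DateTime, Float, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()


class Tag(Base):
    """An analog measurement configured for sampling."""
    __tablename__ = "tags"

    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True, nullable=False)
    description = Column(String)
    measurements = relationship("Measurement", back_populates="tag")


class Measurement(Base):
    """A single sampled value of a tag."""
    __tablename__ = "measurements"

    id = Column(Integer, primary_key=True)
    tag_id = Column(Integer, ForeignKey("tags.id"), nullable=False)
    value = Column(Float, nullable=False)
    timestamp = Column(DateTime, nullable=False)
    tag = relationship("Tag", back_populates="measurements")


# Local SQLite database used by both the collector and the API server.
engine = create_engine("sqlite:///data.db")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
```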
The logical structure of the software is represented as follows:
The working logic of the project modules is shown below:
Once started, the collector reads the tags configured for logging from the SQLite database and, through the tasks defined with the "schedule" library, saves their values to the local database at each configured time interval.
Each save produces a log line informing the user that the data has been successfully historicized.
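To illustrate this working logic, here is a minimal, simplified sketch of how a sampling job could be defined with python-snap7 and schedule; the PLC address, data-block numbers, tag name and interval are placeholders, and Session, Tag and Measurement refer to the SQLAlchemy sketch above:

```python
import logging
import time
from datetime import datetime

import schedule
import snap7
from snap7.util import get_real

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

# Placeholder connection parameters: IP address, rack and slot of the PLC.
plc = snap7.client.Client()
plc.connect("192.168.0.10", 0, 1)


def sample_tag(name: str, db_number: int, offset: int) -> None:
    """Read one REAL value from a PLC data block and historicize it in SQLite."""
    raw = plc.db_read(db_number, offset, 4)  # a REAL is 4 bytes
    value = get_real(raw, 0)
    with Session() as session:  # Session, Tag, Measurement from the SQLAlchemy sketch
        tag = session.query(Tag).filter_by(name=name).one()
        session.add(Measurement(tag_id=tag.id, value=value, timestamp=datetime.now()))
        session.commit()
    logging.info("Historicized %s = %.2f", name, value)


# One scheduled job per configured tag (interval and addresses are illustrative).
schedule.every(60).seconds.do(sample_tag, name="TT-101", db_number=1, offset=0)

while True:
    schedule.run_pending()
    time.sleep(1)
```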
Once started, the API server exposes several endpoints that users can query, via "query parameters", to obtain the information they need.
Each configured endpoint returns a response in JSON format (except for one endpoint, which returns an image). JSON is easily human-readable and, being a widely used format in web development, can be interpreted by most front-end frameworks and many other tools.
There are five configured endpoints (including the "root" and "docs" endpoints); a minimal sketch of one of them is shown right after the list:
root: the main endpoint, which redirects to the API documentation;
tags: dedicated to retrieving the configured tags and adding new tags whose values will be sampled;
data: dedicated to obtaining the values of the sampled tags;
chart: dedicated to generating and displaying a historical chart of the selected tag and its (alarm) setpoints;
docs: dedicated to API documentation, automatically generated by OpenAPI.
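As an example of how an endpoint and its query parameters might be defined, here is a minimal FastAPI sketch of a "tags" GET route with the "name_like" and "description_like" filters; the DTO and the filtering logic are simplified assumptions on my part, not the project's actual code:

```python
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="IIoT data API")


class TagDTO(BaseModel):
    """DTO returned to clients; the type hints also feed the OpenAPI documentation."""
    id: int
    name: str
    description: Optional[str] = None


@app.get("/tags", response_model=list[TagDTO])
def read_tags(name_like: Optional[str] = None, description_like: Optional[str] = None):
    """Return the configured tags, optionally filtered by name and/or description."""
    with Session() as session:  # Session and Tag from the SQLAlchemy sketch above
        query = session.query(Tag)
        if name_like:
            query = query.filter(Tag.name.like(f"%{name_like}%"))
        if description_like:
            query = query.filter(Tag.description.like(f"%{description_like}%"))
        return [TagDTO(id=t.id, name=t.name, description=t.description) for t in query.all()]

# Served with: uvicorn api:app  (assuming the module is named api.py)
```

A request such as GET /tags?name_like=TT would then return only the matching tags as JSON.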
The behavior of the various endpoints can be observed in detail below, thanks to the interactive documentation.
Figure 1 - Overview of the "tags" endpoint for the "GET" method
Figure 2 - Overview of the "tags" endpoint related to the "GET" method with the application of the "name_like" filter
Figure 3 - Overview of the "tags" endpoint related to the "GET" method with the application of the "name_like" and "description_like" filters
Figure 4 - Overview of the "tags" endpoint related to the "POST" method
Figure 1 - Overview of the "data" endpoint for the "GET" method
Figure 2 - Overview of the "data" endpoint related to the "GET" method with the application of the "period" and "name_like" filters (continues in Figure 3)
Figure 3 - Overview of the "data" endpoint related to the "GET" method with the application of the "period" and "name_like" filters
Figure 1 - Overview of the "chart" endpoint related to the "GET" method
Figure 2 - Overview of the "chart" endpoint related to the "GET" method with the application of the "tag_name" and "period" filters (continues in Figure 3)
Figure 3 - Overview of the "chart" endpoint related to the "GET" method with the application of the "tag_name" and "period" filters
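For completeness, here is a minimal sketch of how such a chart endpoint could render the historical plot with matplotlib and return it as a PNG image; app refers to the FastAPI sketch above, while load_history is a hypothetical helper that queries the SQLite database:

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend for server-side rendering
import matplotlib.pyplot as plt
from fastapi import Response


@app.get("/chart")
def read_chart(tag_name: str, period: str = "8h") -> Response:
    """Render a historical plot of the selected tag and return it as a PNG image."""
    timestamps, values = load_history(tag_name, period)  # hypothetical helper

    fig, ax = plt.subplots(figsize=(8, 4))
    ax.plot(timestamps, values, label=tag_name)
    ax.axhline(90.0, color="red", linestyle="--", label="Alarm setpoint")  # illustrative setpoint
    ax.set_xlabel("Time")
    ax.set_ylabel("Value")
    ax.legend()

    buffer = io.BytesIO()
    fig.savefig(buffer, format="png")
    plt.close(fig)
    return Response(content=buffer.getvalue(), media_type="image/png")
```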
As mentioned in the introduction, I developed the project using the Python programming language, a simple but powerful language with a wide range of application fields (machine learning, data analysis, web development, data science, scripting...).
The use of type hints in the code, combined with comments and docstrings, has made it clearer and more developer-friendly, and has also made it possible to generate the API documentation automatically thanks to OpenAPI.
Thanks to the modular structure of the scripts, I only had to write various pieces of code once and could then reuse them in several parts of the project, avoiding "boilerplate" code.
During development I drew on much of the knowledge I already apply at work, especially data management through a database and regular expressions, with which I carried out most of the user-input validation checks.
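As an example of this kind of regular-expression check, here is a minimal sketch of how a user-supplied time period could be validated; the accepted format ("30m", "8h", "7d", ...) is an assumption of mine, not necessarily the format the project actually uses:

```python
import re

# Assumed period format: an integer followed by a unit (m = minutes, h = hours, d = days).
PERIOD_PATTERN = re.compile(r"(\d+)([mhd])")


def validate_period(period: str) -> tuple[int, str]:
    """Return (amount, unit) if the period is valid, otherwise raise ValueError."""
    match = PERIOD_PATTERN.fullmatch(period)
    if match is None:
        raise ValueError(f"Invalid period {period!r}; expected e.g. '30m', '8h' or '7d'")
    return int(match.group(1)), match.group(2)


# Example: validate_period("8h") returns (8, "h").
```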
With a Continuous Integration approach, I was able to monitor the behavior of the various parts of the application and intervene methodically wherever I noticed an anomaly, also with the help of logging libraries.
Once the anomalies were resolved and tested directly, the changes were synchronized with a repository on GitHub using git.
Thanks to linting tools like ruff, I was able to identify critical errors and warnings during development, making the code more robust and as PEP 8 compliant as possible.
Figure 1 - Function for validating the time period entered by the user
Figure 2 - "GET" method of the "data" endpoint, using the SQLAlchemy "ORM" approach
Figure 3 - Definition of DTOs (Data Transfer Objects), used by FastAPI and Pydantic
Figure 4 - Using matplotlib to create historical plots
Figure 5 - Using ruff for linting code
I'm extremely satisfied to have completed the first version of this little project, as I was able to learn new aspects of programming and build on the experience I have gained so far.
The project could be expanded further: data is currently read from a single device, but with a small addition to the collector's code it could easily be sampled from multiple devices, as long as they are reachable via Ethernet.
For completeness, given my curiosity and my recent study of Docker containers, I would have liked to containerize the entire application, effectively turning it into a package of microservices, but since the PC on which I developed the application runs Windows Server 2016, I would have had to install the Enterprise version of Docker Desktop.
Another addition I would really like to make is a front-end for the application, built with a simple and robust framework, to give it a more user-friendly look.
The main reason that prompted me to build this application was to create a project from scratch that could reconcile my years of experience in the IIoT world with my great passion for programming and back-end development.
If you've made it this far, I thank you immensely for reading, and if you have any advice to give me, I invite you to contact me through my social networks or through the contact form.