While building the company's data dashboard, I often pulled tables from various databases for calculation. API response times grow noticeably longer when the query time range is wide or when multiple tables are involved. This post documents what I tried while optimizing that process, plus a few other tricks that can improve program performance.
1. Change data storage structure
In addition to primitive types such as str and int, the common Python data storage structures are list, tuple, and dict.
The list should be familiar to anyone who has learned a programming language: it is essentially an array. Suppose a list holds a lot of data and I need to retrieve one specific item; I have to scan through the whole thing, as follows:
for i in someList:
    if i == someValue:  # linear scan: check every element until we hit it
        print(i)
Time complexity is O(n).
If your data is two-dimensional, such as user data grouped by country, you first have to find the country and then find the record you want, as follows:
for country, users in someList:  # someList holds (country, userList) pairs
    if country == someCountry:
        for user in users:
            print(user)
This time the time complexity climbs to O(n²).
Can switching from list to tuple improve performance? As far as I know, there is no clear evidence that tuples reduce time complexity, because a tuple is essentially a list whose length cannot be changed. When a tuple is created, it requests a fixed amount of memory up front. A list, besides the space for its initial contents, also keeps a reserve area for future growth. So for the same amount of data, a tuple needs less storage than a list, and its space overhead is lower.
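You can see the space difference for yourself with sys.getsizeof; a minimal sketch (the exact byte counts vary by Python version and platform):

import sys

grown = []
for i in range(1000):
    grown.append(i)  # appending makes the list over-allocate spare slots

print(sys.getsizeof(grown))         # list: counts the unused spare capacity
print(sys.getsizeof(tuple(grown)))  # tuple: sized exactly for 1000 items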
Next, I will introduce one of Python's most powerful structures: dict, short for dictionary. Since it is called a dictionary, there is a relationship between key and value; each value is stored under a unique key. So if you want to query a specific piece of information, it looks like this:
print(someDict["yourKey"])
Time complexity is O(1).
Even for the country example above, data stored by country is looked up as follows:
print(someDict["someCountry"]["yourKey"])
Time complexity is still O(1).
So in terms of data storage, my preference is dict > tuple > list.
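As a rough illustration, here is a minimal timeit sketch comparing membership tests on a list and a dict (the data and sizes are made up; absolute timings depend on your machine):

import timeit

data_list = list(range(100_000))
data_dict = {i: None for i in range(100_000)}

# The list test is an O(n) scan; the dict test is an O(1) hash lookup
print(timeit.timeit(lambda: 99_999 in data_list, number=1_000))
print(timeit.timeit(lambda: 99_999 in data_dict, number=1_000))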
2. Use Redis
Redis — a powerful caching database. If your data does not need to be updated in real time (i.e., it is not monitoring data), Redis can be used in many places. When one API pulls a table from the DB, other APIs may need the same table. Re-issuing the SQL or NoSQL query each time wastes performance. Instead, we can pull the table once and cache it in Redis for, say, ten minutes. During those ten minutes, any other API that needs the same data does not have to send another request to the DB; fetching it from Redis is much faster.
Alternatively, your API's return value can also be cached in Redis. Users sometimes refresh the page and resend the same request; if every such request is recalculated in the background and re-queries the DB, a lot of performance is wasted. If the returned data is already saved in Redis, you can hand back the result directly without any computation.
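Here is a minimal cache-aside sketch with the redis-py client (query_table_from_db is a hypothetical DB helper, and the key scheme and TTL are my own assumptions):

import json
import redis  # redis-py; assumes a Redis server on localhost:6379

r = redis.Redis(host="localhost", port=6379, db=0)
CACHE_TTL = 600  # ten minutes, as above

def get_table(table_name):
    cached = r.get(f"table:{table_name}")
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the DB entirely
    rows = query_table_from_db(table_name)  # hypothetical DB call
    r.setex(f"table:{table_name}", CACHE_TTL, json.dumps(rows))
    return rows

The same pattern works for API responses: key the cache on the request parameters and return the stored result whenever it is present.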
3. Async functions in FastAPI
The FastAPI framework has a very handy feature: async, basically equivalent to async in Node.js. An async function can wait for a slow response, suspend with its state saved, and release the thread so other work can run in the meantime.
async def someFunc():
    await something  # `something` must be awaitable (a coroutine, Task, or Future)
Remember that await can only be used inside async functions. From my own testing so far: if your function has nothing to await, it is recommended to remove async. A function that never needs to wait gains nothing from being async, and removing it gives better performance.
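A minimal sketch of an async FastAPI route (the path is made up, and asyncio.sleep stands in for a real I/O call such as a DB query):

import asyncio
from fastapi import FastAPI

app = FastAPI()

@app.get("/slow")
async def slow_endpoint():
    # While this coroutine is awaiting, the event loop can serve other requests
    await asyncio.sleep(1)  # stand-in for slow I/O (DB query, HTTP call, ...)
    return {"status": "done"}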
4. Use multiprocessing
Multiprocessing is easy to understand from its literal meaning: multiple processes handle different tasks separately. It has been discussed online that multiprocessing is more efficient than multithreading in Python; I haven't tested that myself, so I won't draw conclusions here. For now, let me introduce the two multiprocessing methods I currently use.
from multiprocessing import Pool

if __name__ == "__main__":  # guard so child processes can import this module safely
    with Pool(processes=2) as p:
        print(p.map(someFunc, [1, 2]))  # runs someFunc on each input in parallel
Pool runs a function over the inputs in parallel through its map method. Remember that the function must be defined at the outermost layer (module level) so the child processes can import it.
from multiprocessing import Process

p = Process(target=someFunc, args=(1, 2))
p.start()

# The original (parent) process continues to run here

p.join()  # wait for process p to finish
The difference between Process and Pool is that after a Process is created and started, the original (parent) process continues executing the program alongside it. With Pool, the parent submits the work to the pool, which divides it among several worker processes according to the number of tasks.
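Putting it together, a runnable sketch of the Process pattern (someFunc here is a stand-in workload I made up):

import time
from multiprocessing import Process

def someFunc(a, b):
    time.sleep(1)  # pretend this is a heavy task
    print("child done:", a + b)

if __name__ == "__main__":
    p = Process(target=someFunc, args=(1, 2))
    p.start()
    print("the parent keeps working while the child runs")
    p.join()  # wait for the child to finish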
Have fun~