Slashing My Response Time Like a Samurai

I had a battle last week on a personal project: retrieving data from my database resulted in a 504 Gateway Timeout error. Funny, because just 24 hours earlier everything seemed fine. In the couple of weeks leading up to that, I had been obsessed with getting all my response times down to milliseconds, at least into triple digits like 100ms to 500ms. I got really into it and tried a few things.

  1. Return only what is needed:- What I used to do was dump everything to the client side. This was quite OK for the first few thousand transactions, but as we crossed ten thousand, we started experiencing delays in retrieving users' transactions. It got worse in the admin area, where we had to fetch every transaction recorded in our database (mind you, we had applied pagination). So instead of returning everything about a transaction when retrieving multiple records, I selected only what needed to be captured in the UI.
    A quick example: if the transactions table has 20 columns and the UI only needs 7 of them displayed, my endpoint selects those 7 columns and returns them for the UI to integrate. That way I don't push unnecessary data across the network and slow down the response time.
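    Here's a minimal sketch of what that looks like with TypeORM's find options (the repository setup and column names here are illustrative, not my exact schema):

    ```ts
    import { AppDataSource } from './data-source'; // hypothetical DataSource module
    import { Transaction } from './transaction.entity';

    // Instead of returning all 20 columns, select only the 7 the UI renders.
    const transactions = await AppDataSource.getRepository(Transaction).find({
      select: ['id', 'amount', 'type', 'status', 'reference', 'createdAt', 'userId'],
      take: 20, // pagination: page size
      skip: 0,  // pagination: offset
    });
    ```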

  2. Lazy Loading over Eager Loading:- Lazy loading and eager loading are popular in a lot of ORMs. Database designs involve a lot of relationships, especially when you are trying to model real-world data. Take my transactions as an example. Users perform transactions, whether deposits or withdrawals, in almost every web application. When designing a database, it's a good idea to have two different tables capture these scenarios: one table to store user details and another to store transactions. We then create a relationship to tie a transaction to a user, known as a one-to-many relationship: a user has many transactions, and a transaction belongs to just one user. When storing a transaction in our database, we store the userId. It's all good until the product designer wants the frontend developer to show not just the userId, but the first name, last name and any other information we can get from the user.
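    In TypeORM terms, that relationship might look something like this (a minimal sketch; the fields are illustrative):

    ```ts
    import {
      Entity, PrimaryGeneratedColumn, Column, OneToMany, ManyToOne, JoinColumn,
    } from 'typeorm';

    @Entity()
    export class User {
      @PrimaryGeneratedColumn()
      id: number;

      @Column()
      firstName: string;

      @Column()
      lastName: string;

      // One user has many transactions...
      @OneToMany(() => Transaction, (transaction) => transaction.user)
      transactions: Transaction[];
    }

    @Entity()
    export class Transaction {
      @PrimaryGeneratedColumn()
      id: number;

      @Column()
      amount: number;

      @Column()
      userId: number; // the foreign key we store with every transaction

      // ...and each transaction belongs to exactly one user.
      @ManyToOne(() => User, (user) => user.transactions)
      @JoinColumn({ name: 'userId' })
      user: User;
    }
    ```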

    At first, rookie that I am, I decided to make an extra database call when returning transaction records. After fetching the records, I looped through them, grabbed each userId, and used .findOne to get the user's record and return it alongside the transaction. Trust me, this worked fine until we crossed ten thousand transactions and I noticed our response time getting a lot slower (this is the classic N+1 query problem: one query for the list, plus one extra query per row). I tried eager loading and observed the response time for a couple of weeks, and it did not get better. Not only that, it dumped more data: when I retrieved transactions, it loaded everything about the user. And the most annoying thing about eager loading is that if any other table is linked to the parent table, it dumps all of that too. What a mess of a solution.
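    For context, this is roughly what that rookie approach looked like (a reconstruction, not my exact code):

    ```ts
    // One query for the page of transactions...
    const transactions = await transactionRepository.find({ take: 20 });

    // ...plus one extra query per transaction: the N+1 problem.
    const results = [];
    for (const tx of transactions) {
      const user = await userRepository.findOne({ where: { id: tx.userId } });
      results.push({ ...tx, firstName: user?.firstName, lastName: user?.lastName });
    }
    ```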

    Lazy loading, however, proved more effective. The beauty of lazy loading is that related entities are only fetched when they are accessed; it does not load them whenever you retrieve a record, you have to explicitly access them. This really helped reduce our response time. There is a lot of material on the internet about when and when not to use lazy or eager loading, so I suggest you do your research and tell me how you end up using it in your various projects.
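    In TypeORM, a relation becomes lazy when you type it as a Promise, and the related table is only queried when you await it. A sketch of the idea:

    ```ts
    import { Entity, PrimaryGeneratedColumn, ManyToOne } from 'typeorm';
    import { User } from './user.entity'; // hypothetical path

    @Entity()
    export class Transaction {
      @PrimaryGeneratedColumn()
      id: number;

      // Typing the relation as a Promise marks it lazy in TypeORM.
      @ManyToOne(() => User, (user) => user.transactions)
      user: Promise<User>;
    }

    // Assuming a repository obtained from your DataSource:
    // fetching transactions runs one query and never touches the users table...
    const transactions = await transactionRepository.find({ take: 20 });

    // ...until you explicitly access the relation, which triggers its own query.
    const firstUser = await transactions[0].user;
    ```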

  3. Caching with Redis:-  I wager I could skip the first two and this would still be effective. Figuring out the right way to cache was not straightforward. The first thing I tried was caching with TypeORM. In TypeORM you can add cache: true to your config. What this does is create a query-result-cache table, which stores responses for a set duration and returns those instead of hitting the table where the records are stored. Let me break this down: when I first added this to my transactions table, TypeORM stored my responses in the query-result-cache table, and whenever I made another API call within 30 seconds, it went to the query-result-cache table and retrieved the last stored data that matched the query. This proved to be the solution at first, at least for the first week. Then we started noticing that retrieving a user's transactions was taking a lot of time. Whenever I made a transaction, five seconds would pass before it got updated on my list. It did not make sense to me; I knew I hadn't made any serious update in a week.
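    For reference, the setup looks roughly like this (connection details omitted; the 30-second window comes from the per-query duration):

    ```ts
    import { DataSource } from 'typeorm';
    import { Transaction } from './transaction.entity';

    export const AppDataSource = new DataSource({
      type: 'postgres',
      // ...connection options omitted...
      // `cache: true` makes TypeORM create a query-result-cache table in this
      // same database and serve repeated queries from it.
      cache: true,
    });

    // Per-query caching, with the duration in milliseconds:
    const transactions = await AppDataSource.getRepository(Transaction)
      .createQueryBuilder('transaction')
      .cache(30000) // serve from query-result-cache for 30 seconds
      .getMany();
    ```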

    After multiple complaints, I decided to track the response time: on staging it was in milliseconds, on production it was in seconds, hmmmmmmm. I logged into my admin dashboard and it was the worst; transactions were not loading (of course I blamed the frontend engineer for that). I got a reality check when transactions stopped loading entirely and my DMs were flooded with complaints. I checked on my local machine and it was the same thing. Oh, my night was not going well. I tested on staging again, same thing. I must be losing it, because I had not pushed anything. All my tests were returning a 504 Gateway Timeout error, like what does that even mean???? With more tests and a lot of broken things, I finally figured out that my solution was the problem: the query-result-cache table had accumulated a lot of data in a short time, and instead of being the solution it had become the problem. I had to figure out how to move this cache data out of my main database, and also make it short-lived.

    Well, Redis could do that. Redis is an in-memory database, and the beauty of Redis is that it stores data using a key-value model. In my case I made the endpoint the key and the response from my database the value. However the frontend developer decides to call my endpoint (filters included), Redis caches that response and returns the stored value to the frontend on subsequent calls. Be careful not to cache a transaction list for too long, trust me: if a user makes a transaction, you want the TTL (time to live) you set to be short enough that the newly recorded transaction shows up.
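    Here's a rough sketch of that as an Express middleware using ioredis (the key scheme and the 10-second TTL are my assumptions; tune the TTL to how fresh your data needs to be):

    ```ts
    import Redis from 'ioredis';
    import { Request, Response, NextFunction } from 'express';

    const redis = new Redis(); // connects to localhost:6379 by default

    // The method plus full URL (path and query string) is the cache key,
    // so every combination of filters gets its own entry.
    export async function cacheResponse(req: Request, res: Response, next: NextFunction) {
      const key = `cache:${req.method}:${req.originalUrl}`;

      const cached = await redis.get(key);
      if (cached) {
        // Cache hit: return the stored response without touching the database.
        return res.json(JSON.parse(cached));
      }

      // Cache miss: wrap res.json so the handler's response is stored on the way out.
      const originalJson = res.json.bind(res);
      res.json = (body?: any): Response => {
        redis.set(key, JSON.stringify(body), 'EX', 10); // short TTL, in seconds
        return originalJson(body);
      };

      next();
    }
    ```

    You'd attach it per route, something like `app.get('/transactions', cacheResponse, getTransactions)`.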

Not to brag, but our response time for fetching transactions is currently averaging 207ms. I think that's not bad for an application built with a monolithic architecture and hosted on a $7 DigitalOcean droplet. If your application is doing way better, I would love to hear about it in the comments section, or you could email me, Goodness Ezeokafor. May the code force be with you.