System architecture is the backbone of any system: no matter how good the code is, if the architecture is poor, the system will be unreliable, hard to scale, and fragile.
Horizontal vs. Vertical Scaling
Vertical Scaling is the idea of increasing the resources for a specific node e.g. adding more memory to improve performance.
Horizontal Scaling is the idea of increasing the number of nodes, therefore decreasing the load on a single server.
Generally, vertical scaling is easier to implement, but it has hard limits: there is only so much memory you can add to a single machine.
This information is drawn from the Cracking the Coding Interview book.
Content Delivery Network
- AWS CloudFront
Can any pre-processing or background processing be done independently of other processes to optimise performance?
Load balancing allows a system to distribute load evenly across servers so that one server doesn’t crash and take down the whole system.
It requires a network of multiple nodes, each running the same replica of the code.
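The distribution step can be sketched in Python as a simple round-robin balancer; the node names below are placeholders, and real load balancers (such as AWS ELB) also handle health checks and failover.

```python
import itertools

class RoundRobinBalancer:
    """Cycles through a fixed pool of identical server nodes."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(list(servers))

    def next_server(self):
        """Return the node that should handle the next request."""
        return next(self._cycle)

balancer = RoundRobinBalancer(["node-a", "node-b", "node-c"])
# Six incoming requests spread evenly: two per node.
handled = [balancer.next_server() for _ in range(6)]
```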
Web Server, Load Balancer and Reverse Proxy
- AWS Elastic Load Balancer
AWS's managed load balancer, placed in front of your servers.
Relational databases can slow down as systems grow and queries require more complicated joins, so expensive joins should generally be avoided at scale.
Denormalization is a strategy for avoiding this problem: it improves the read performance of a database at the expense of some write performance, by adding redundant copies of data or grouping data together.
e.g. suppose a database has two tables, projects and tasks, where a project can have many tasks. If fetching a project along with information about its tasks requires an expensive join, it may be better to store the project name with each task. This decreases write performance and introduces redundant data, but read performance improves.
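The projects/tasks example above can be sketched with Python's built-in sqlite3; the table and column names are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT)")
# Denormalized: each task carries a redundant copy of its project's name,
# so reads need no join -- but renaming a project must update its tasks too.
conn.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY,
    project_id INTEGER,
    project_name TEXT,
    title TEXT)""")

conn.execute("INSERT INTO projects VALUES (1, 'Website Redesign')")
conn.executemany(
    "INSERT INTO tasks VALUES (?, ?, ?, ?)",
    [(1, 1, "Website Redesign", "Draft wireframes"),
     (2, 1, "Website Redesign", "Pick colour palette")])

# Read path: task rows already contain the project name, no join needed.
rows = conn.execute("SELECT title, project_name FROM tasks").fetchall()
```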
NoSQL databases are designed to store abstract pieces of data in a way that scales better than an SQL database, and the concept of joins does not exist in a NoSQL database.
In-memory caches can deliver very rapid results. A cache generally uses simple key-value pairs and sits between the application and the data store.
When a request is made for a specific piece of information, the cache is queried first to see if the data already exists there; if it doesn’t, the data is retrieved from the data store. The result may then be stored in the cache.
The cache might store a query and its results directly, or it might cache a specific object, such as a rendered part of a website.
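This read path is the cache-aside pattern; a minimal sketch, using a plain dict in place of a real cache like Redis or Memcached, with `fetch_from_store` as a hypothetical stand-in for a slow database query:

```python
def fetch_from_store(key):
    # Stand-in for a slow data-store lookup.
    return f"value-for-{key}"

cache = {}

def get(key):
    """Cache-aside read: try the cache first, fall back to the store."""
    if key in cache:
        return cache[key]            # cache hit: no data-store round trip
    value = fetch_from_store(key)    # cache miss: query the data store
    cache[key] = value               # keep the result for subsequent reads
    return value
```

A second `get` for the same key is served entirely from memory.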
Asynchronous Processing and Queues
Processes that are going to take a long time should be asynchronous (run in the background without blocking a thread). This prevents a user from having to wait for a process to complete.
A queue can be used to process jobs over time; once a job is complete, the user can be notified.
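The pattern above can be sketched with Python's standard-library `queue` and a background worker thread; the job names are placeholders, and appending to a `completed` list stands in for notifying the user.

```python
import queue
import threading

jobs = queue.Queue()
completed = []

def worker():
    """Pulls jobs off the queue and records each completion."""
    while True:
        job = jobs.get()
        if job is None:                       # sentinel: shut the worker down
            break
        completed.append(f"{job} done")       # stand-in for user notification
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The caller enqueues work and returns immediately instead of blocking.
for job in ("resize-image", "send-email"):
    jobs.put(job)

jobs.join()        # wait for all queued jobs to finish
jobs.put(None)     # stop the worker
t.join()
```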
- AWS Route 53
Domain Name System (DNS) web service.
It connects user requests to AWS infrastructure such as an EC2 instance or load balancer.
Deployments – Servers or Serverless
- AWS ECS
AWS Elastic Container Service (ECS) is a fully managed container orchestration service. ECS clusters can run either on AWS Fargate, a serverless compute engine for containers, or on EC2 instances.
AWS Fargate is fully managed, but it has limitations, such as no GPU support and caps on the available memory and CPU units.
These limitations can be overcome by using EC2 instances, which scale to much larger sizes and provide better options for compliance and government requirements.