[–][deleted] 2 insightful - 1 fun (1 child)

> I think if all the things you recommended were done, with the thumbnails, static files, and CSS served from other servers, that'd probably give us something like 4x-6x the traffic capacity on the main server. And we can probably handle about 4x normal traffic on our current setup without changing anything before it really slows down. So combined that'd be roughly 16x-24x our current traffic.

Agreed

> Then once we hit those limits we can start upgrading the main server to have more CPU cores, of course. Maybe research what plan you would upgrade to next, so we can do it quickly if necessary.

I'll keep this in mind, but I think we need to plan on horizontal scaling (more servers) rather than vertical scaling (a more powerful server). For example, with popular hosting company Linode ;) 4 cores is $40/mo, but 8 cores is $160/mo. So we could have four 4-core servers (16 cores total) for the price of a single 8-core server. Traffic can be split across those servers by putting a load balancer in front of them.
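
To make the load balancer part concrete, here's a rough sketch of what the nginx config on that front server could look like (the upstream name and backend IPs are made up for illustration, not anything we're actually running):

    # front load balancer: nginx passes each request on to one of the
    # app servers, round-robin by default
    events {}

    http {
        upstream app_servers {
            server 10.0.0.2;   # 4-core app server 1
            server 10.0.0.3;   # 4-core app server 2
            server 10.0.0.4;   # 4-core app server 3
            server 10.0.0.5;   # 4-core app server 4
        }

        server {
            listen 80;
            location / {
                proxy_pass http://app_servers;
            }
        }
    }

A nice side effect: if one app server stops responding, nginx will skip it and keep sending traffic to the others, which is another point in favor of scaling out instead of up.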

"only serve 4 requests at once right now" Can you tell me more about this? Is it a setting in nginx? Can this number be expanded by increasing the number of CPUs? Is that the only way?

That's the number of CPU cores we have. Yes, I believe more cores is the only way to serve really high traffic levels, though at some point you also start maxing out what a single network card can handle.
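
If it helps, the pieces of nginx.conf that tie into this are worker_processes and worker_connections; a typical setup (assuming common defaults here, I'd have to double-check our exact file) looks like:

    # one nginx worker per CPU core; each worker multiplexes many open
    # connections, but CPU-heavy work still scales with the core count
    worker_processes auto;

    events {
        worker_connections 1024;   # simultaneous connections per worker
    }

So more cores means more workers, which is why adding cores (or adding whole servers behind a load balancer) is the main lever.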

[–]magnora7 1 insightful - 1 fun (0 children)

Cool, thanks for the info. I'm all about the horizontal scaling. Sounds good to me.