
How many SELECTs per second can a MySQL server run?

I’m writing a business plan, and I need to estimate the costs for when my website is reached by 500,000 unique visitors a day.

  • visitors: 500,000
  • pageviews: 1,500,000
  • spider pageviews: 500,000
  • total pageviews: 2,000,000

Each page runs roughly 50 queries, give or take:

  • queries per day: 100 Million
  • per hour: 4 Million
  • per minute: 70,000
  • per second: 1,200
  • peak: 3,000
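As a sanity check, the arithmetic can be reproduced in a few lines of Python (a minimal sketch; the 2.6x peak factor is an assumption chosen to match the stated peak):

    # Back-of-the-envelope MySQL load estimate, using the figures above.
    pageviews_per_day = 2_000_000      # visitor + spider pageviews
    queries_per_page = 50              # rough average per page

    queries_per_day = pageviews_per_day * queries_per_page  # 100,000,000
    avg_qps = queries_per_day / 86_400                       # ~1,157 average

    # Peak factor is an assumption; ~2.6x average matches the ~3,000 qps
    # peak stated above.
    peak_qps = avg_qps * 2.6                                 # ~3,000

    # Linear extrapolation from the current site (2,000 visits/day at
    # ~200 qps), as done below: 250x the traffic -> ~50,000 qps.
    projected_qps = 200 * (500_000 / 2_000)                  # 50,000

    print(f"avg {avg_qps:,.0f} qps, peak ~{peak_qps:,.0f} qps, "
          f"extrapolated {projected_qps:,.0f} qps")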

Going by this calculation, I need to handle 3,000 queries per second at peak… what kind of server can handle that?

The problem is: my site is currently doing 2,000 visits a day, with roughly 150–200 queries per second…
extrapolating from that, I’d expect around 50,000 queries per second.

How many servers would I need, in a cluster or with replication, to manage this load?

Asked on Server Fault on November 22, 2021

7 Answers

There are too many things that can affect your queries per second, so please do not trust my data without testing yourself. I post my speed test results here to help others estimate the QPS of a current (2018-09) MySQL database and machine. In my test the data size was smaller than server memory, which dramatically reduces I/O and improves performance a lot.

I used a GCP cloud MySQL server instance with one CPU, 3.75 GB of memory, and a 100 GB SSD, and got:

  • 1 client, single-row read per statement: 799 statements/second.
  • 50 clients, single-row read per statement: 6,403 statements/second.
  • 50 clients, single-row write per statement: 4,341 statements/second (4,341 rows written/second).
  • 1 client, 30,000-row bulk write per statement: 92,109 rows written/second.
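For anyone wanting to reproduce the single-client read test, here is a minimal Python sketch using mysql-connector-python; the connection settings and the items table are placeholders, not part of the original benchmark:

    import time
    import mysql.connector  # pip install mysql-connector-python

    # Placeholder connection settings -- adjust for your own instance.
    conn = mysql.connector.connect(
        host="127.0.0.1", user="bench", password="secret", database="benchdb"
    )
    cur = conn.cursor()

    N = 10_000
    start = time.perf_counter()
    for i in range(N):
        # One single-row primary-key read per statement, as in the test above;
        # assumes a hypothetical `items` table with an integer `id` key.
        cur.execute("SELECT * FROM items WHERE id = %s", (i % 1000 + 1,))
        cur.fetchall()
    elapsed = time.perf_counter() - start

    print(f"{N / elapsed:,.0f} single-row reads/second from one client")
    cur.close()
    conn.close()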

Answered by bronze man on November 22, 2021

For large "hot" datasets, it is probably worth the investment in time to convert to a "big data" scheme; that is what they are for. For example, if you have a vast quantity of data to retrieve, but you never rewrite it, only append new data, look at Apache Hive. Browse around; there is usually a flavor you can interface easily enough with existing code, and it will prevent the heartburn of running out of cache space.

Answered by BHGalyean on November 22, 2021

Judging by your comments, the biggest factor will be your data set size, or at least the size of the "hot" data set. 3,000 qps or even 8,000 qps on a 16-core server is not a problem at all, as long as the server rarely has to go to disk to satisfy a query. Once the active data set exceeds the amount of memory InnoDB is using to cache it, your performance will drop off rapidly.
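One rough way to check this is to compare the configured InnoDB buffer pool against the total data-plus-index size; a minimal Python sketch, with placeholder credentials:

    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(host="127.0.0.1", user="admin", password="secret")
    cur = conn.cursor()

    # Configured InnoDB buffer pool size, in bytes.
    cur.execute("SELECT @@innodb_buffer_pool_size")
    pool_bytes = cur.fetchone()[0]

    # Total data + index size across all InnoDB tables.
    cur.execute("""
        SELECT COALESCE(SUM(data_length + index_length), 0)
        FROM information_schema.tables
        WHERE engine = 'InnoDB'
    """)
    data_bytes = cur.fetchone()[0]

    gib = 1024 ** 3
    print(f"buffer pool {pool_bytes / gib:.1f} GiB, "
          f"data+indexes {data_bytes / gib:.1f} GiB")
    if data_bytes > pool_bytes:
        print("Active set may not fit in cache; expect disk reads under load.")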

Answered by Elliott on November 22, 2021

I used to work for an e-commerce company with a website that had several million page hits per day. We had a single DELL PE 1750 with 2 single core CPUs and 2GB of RAM, database size approx. 4GB. At peak times this server handled up to 50k+ queries per second.

Having said this: the database was well structured, all queries were finely tuned (we had weekly sessions analysing the slow query logs and fixing up queries and indexes), and the server setup was also fine-tuned. Caching is definitely a good idea, but MySQL does that anyway; you just have to analyse the performance and then fine-tune how your memory is used (query cache vs. other options).

From that experience I can tell you that the highest impact is caused by missing indexes, wrong indexes and bad database design (e.g. long string fields as primary keys and similar nonsense).
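For anyone wanting to run the same kind of slow-query analysis, the log can be switched on at runtime; a minimal Python sketch (the 0.5 s threshold is illustrative, and SET GLOBAL needs the appropriate privilege):

    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(host="127.0.0.1", user="admin", password="secret")
    cur = conn.cursor()

    # Log every statement slower than 0.5s, plus statements not using indexes.
    cur.execute("SET GLOBAL slow_query_log = 'ON'")
    cur.execute("SET GLOBAL long_query_time = 0.5")
    cur.execute("SET GLOBAL log_queries_not_using_indexes = 'ON'")

    cur.execute("SELECT @@slow_query_log_file")
    print("slow queries are logged to:", cur.fetchone()[0])

The resulting log can then be summarised with mysqldumpslow, which ships with MySQL.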

Answered by wolfgangsz on November 22, 2021

As Ignacio remarked, you may want to look into caching, in the CMS or perhaps even in front of the stack. 50+ queries for every (every!) page truly is a lot.
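To illustrate the idea, here is a minimal sketch of an in-application query-result cache with a time-to-live; a real CMS would more likely use a shared cache such as memcached:

    import time

    # Minimal per-process query-result cache with a time-to-live.
    _cache = {}

    def cached_query(sql, run_query, ttl=60.0):
        """Return a cached result for sql, re-running it only after ttl seconds."""
        now = time.monotonic()
        hit = _cache.get(sql)
        if hit is not None and now - hit[0] < ttl:
            return hit[1]                # served from cache, no database hit
        result = run_query(sql)          # fall through to the database
        _cache[sql] = (now, result)
        return result

    # Usage: repeated page renders reuse the result instead of issuing the
    # same SELECT on every single page view.
    rows = cached_query("SELECT * FROM sidebar_links",
                        run_query=lambda q: ["...rows from MySQL..."])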

Answered by Joris on November 22, 2021

This really can't be estimated without knowing the specific queries you're running, the database schema, and its size.

A simple SELECT on an indexed column is quite a different beast from a couple of JOINs based on non-indexed ones... and of course things change a lot if the involved tables contain 1K records or 1M.
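To see the difference concretely, EXPLAIN shows whether a lookup can use an index or has to scan the whole table; a minimal Python sketch with a made-up users table (indexed id, unindexed email):

    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(
        host="127.0.0.1", user="admin", password="secret", database="mydb"
    )
    cur = conn.cursor()

    # Hypothetical table: `users` with an index on `id` but none on `email`.
    for sql in (
        "EXPLAIN SELECT * FROM users WHERE id = 42",          # indexed: type=const
        "EXPLAIN SELECT * FROM users WHERE email = 'a@b.c'",  # unindexed: type=ALL
    ):
        cur.execute(sql)
        for row in cur.fetchall():
            print(row)  # watch the `type` and `rows` columns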

Also:

  • What is your current hardware configuration?
  • How much of its power (CPU, RAM, Disk I/O) is your server using under the current load?

Answered by Massimo on November 22, 2021

It all depends on how complex the query is, and how much memory the servers have, and how fast the disks are.

If the queries are very simple, or very well tuned then a single large database server can handle that. If however the queries are very complex (or simple but poorly tuned) then you'll need several servers.

Answered by mrdenny on November 22, 2021
