On a Drupal site the script backs up the site into separate files:
- Site (core, theme and custom) files
- Content (user) files
A typical set looks like this:

2020-02-06-1326.sitename.stage.db.sql
2020-02-06-1326.sitename.stage.configs.tar.gz
2020-02-06-1326.sitename.stage.files.tar.gz
2020-02-06-1326.sitename.stage.repo.tar.gz
2020-02-06-1326.sitename.stage.site.tar.gz

During the script run I noticed it was taking an unusually long time to do the DB dump, and a quick check showed the available disk space decreasing at an alarming rate.
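Keeping an eye on free space during a long-running dump is easy to script. Here is a minimal POSIX-shell sketch; the backup path and polling interval in the usage comment are illustrative assumptions, not part of the original script:

```shell
# Minimal sketch: report free kilobytes on the filesystem holding a path.
# Assumes a POSIX shell with df and awk available.
free_kb() {
    # df -k gives portable 1K-block output; NR==2 is the data row and
    # $4 is the "Available" column.
    df -k "$1" | awk 'NR==2 {print $4}'
}

# Example polling loop for a backup run (path and interval are made up):
# while :; do
#     printf '%s free: %s KB\n' "$(date +%H:%M:%S)" "$(free_kb /var/backups)"
#     sleep 30
# done
```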
I knew that meant something was growing at an unreasonable rate, and I needed to find out where.
A quick and easy way to do this is a simple Drush command:
drush sql-query "SELECT table_name AS 'Table', ROUND(((data_length + index_length) / 1024 / 1024), 2) AS 'Size (MB)' FROM information_schema.TABLES WHERE table_schema = 'database name' ORDER BY (data_length + index_length) DESC;"
The above command lists the tables in the named schema from largest to smallest (replace 'database name' with your site's database).
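To avoid hand-editing the schema name into the query each time, the SQL can be built by a small helper and fed to `drush sql-query`. This is a hedged sketch: the helper name and the example schema are my own, only the SQL comes from the command above.

```shell
# Build the table-size query for a given schema name; the SQL is the
# same information_schema query shown above, with the schema injected.
size_query() {
    printf "SELECT table_name AS 'Table', ROUND(((data_length + index_length) / 1024 / 1024), 2) AS 'Size (MB)' FROM information_schema.TABLES WHERE table_schema = '%s' ORDER BY (data_length + index_length) DESC;" "$1"
}

# Usage (assumes drush is on PATH and bootstrapped for the site):
# drush sql-query "$(size_query sitename_stage)"
```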
In my case it was the watchdog table: it had grown to 11 GB in a single day. A fellow developer had a piece of code that was generating a warning at a rate of roughly 10,000 entries an hour. A quick fix of the code and a flush of the watchdog table cleared the issue up.
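For the flush step, a guarded wrapper like the one below works; `watchdog-delete all` is the Drush 8 spelling (Drush 9+ uses `watchdog:delete`), and the dry-run switch is my own addition, not something from the original fix:

```shell
clear_watchdog() {
    # Assumption: this Drush version provides watchdog-delete (Drush 8);
    # on Drush 9+ change the command to "drush watchdog:delete all".
    cmd='drush watchdog-delete all -y'
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Dry-run mode: show the command instead of running it.
        echo "$cmd"
    else
        $cmd
    fi
}

# Usage:
# DRY_RUN=1 clear_watchdog   # prints the command it would run
# clear_watchdog             # actually empties the watchdog table
```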