I’ve had that happen with database logs at a place I used to work, back in 2015–2016.
The culprit was a very shitty system that, for some reason, fired off around 140 completely identical delete queries per millisecond. When I say completely identical, I mean it. It’d end up looking something like this in the log:
2015-10-22 13:01:42.226 = delete from table_whatever where id = 1 and name = 'Bob' and other_identifier = '123';
2015-10-22 13:01:42.226 = delete from table_whatever where id = 1 and name = 'Bob' and other_identifier = '123';
2015-10-22 13:01:42.226 = delete from table_whatever where id = 1 and name = 'Bob' and other_identifier = '123';
-- repeated over and over with the exact same fucking timestamp, then repeated again with slightly different parameters and a different timestamp
Of course, the response was “no way it’s our system, it handles too much data, we can’t risk losing it, it’s your database that’s messy”. Yeah, sure, I set up triggers to repeat every fucking delete query. Fucking morons. Since they were “more important”, database logging was disabled.
Having query logging enabled on a production database is bonkers. The duplicate deletes are too, but query logging is intended for troubleshooting only; it kills performance.
Take a wild guess as to why it had to be enabled in the first place, and only for Delete queries.
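For anyone wondering what “query logging, but only for Delete queries” can even look like: the post never names the database engine, so take this as a rough sketch assuming SQL Server, where an audit can be scoped to DELETE statements on a single object. The audit names and file path below are made up; only table_whatever comes from the log excerpt above.

-- Rough sketch, assuming SQL Server; audit/spec names and the file path are hypothetical.
CREATE SERVER AUDIT DeleteOnlyAudit
    TO FILE (FILEPATH = 'D:\Audit\');                 -- where the audit files land
GO
ALTER SERVER AUDIT DeleteOnlyAudit WITH (STATE = ON);
GO
-- Run inside the target database: capture only DELETE statements against this one table.
CREATE DATABASE AUDIT SPECIFICATION DeleteOnlySpec
    FOR SERVER AUDIT DeleteOnlyAudit
    ADD (DELETE ON OBJECT::dbo.table_whatever BY public)
    WITH (STATE = ON);
GO

Even scoped down that far, every delete on a busy table turns into an extra write to the audit file, which is why this kind of logging normally only stays on while someone is chasing a problem.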
I saw PHP error logs fill up a disk in a few minutes (thankfully on a shared dev server), thanks to an accidental endless loop that just flooded everything with a wall of notices…
And when you work with a CMS that allows third-party plugins that don’t bother to catch exceptions, aggressive web crawlers are not a good thing to encounter on a weekend… 1 exception × 400,000 product pages makes for a loooot of text.