Allegro.cc - Online Community


This thread is locked; no one can reply to it.
How to Track Data Changes in Database
Fishcake
Member #8,704
June 2007
avatar

I store game data inside a MySQL database. The game data can be modified via a web-based admin page written in PHP. Is there a nice way to automatically record these changes as SQL files, so that I can track them with git and easily push the changes to the development/staging/production servers?

For example, when I create a level, an insert statement will be generated:

INSERT INTO `levels` ...;

Then, when I edit the level in the future, an update statement will be generated instead:

UPDATE `levels` SET ... WHERE `id` = ...;

These files are then checked into git. When the time comes, I push the changes from the local repo to the dev repo. The dev server keeps track of the last SQL file it has executed, and upon receiving the changes it executes the new SQL files sequentially.
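For illustration, here's a rough sketch of the generating side (Python with made-up names; my admin page is PHP, but the idea is the same):

```python
import os

def record_change(sql_dir, statement):
    """Write a generated statement to the next sequentially numbered
    .sql file, ready to commit to git and replay in order elsewhere."""
    existing = [f for f in os.listdir(sql_dir) if f.endswith(".sql")]
    next_index = len(existing)
    path = os.path.join(sql_dir, "%04d.sql" % next_index)
    with open(path, "w") as f:
        f.write(statement + "\n")
    return path

# e.g. the admin page would call this after saving a level:
# record_change("db/changes", "UPDATE `levels` SET `name` = 'maze' WHERE `id` = 2;")
```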

Or maybe there's a better way? I just want to have my data inside git.

The Master
Member #4,498
April 2004
avatar

I needed to do something similar with a SQLite database. I had actually written my own server program in C++ using boost sockets. The other thing that comes with boost sockets is a nice signal-slot API that I was able to use to fire a signal down an HTTP socket to connected clients.

But I think your scenario is a little different.

If you've written your own PHP scripts (i.e. you're not using a pre-built interface like phpBB), you can always create a database log in the form of a text file that records which table was edited, which rows and columns changed, and when. Then when the game queries the database, your PHP script can respond with a notification from your log file.

Will that solve the problem?

We can only do what we feel is right each moment as we live it.

bamccaig
Member #7,536
July 2006
avatar

You're talking about static data, not dynamic data. There's no automatic way to track these changes that I know of, but personally I just script everything on disk, track it with Git, and then execute it against the database server(s). You can even automatically execute the scripts against test and production servers at various stages if you trust your scripting enough.

Making your changes with a Web application isn't much different. You'll just have to generate the scripts from the Web application based on what capabilities you programmed into the Web site, invoke Git to track them, and invoke a MySQL client to execute them against the database. Simple enough to do. Just make sure you add sufficient error handling and wrap database operations in transactions that can be backed out if something goes wrong (perhaps also automate database backups if dynamic data matters).
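To illustrate the transaction point, here's a minimal sketch (Python's sqlite3 just so it's runnable; the same idea applies to a MySQL client) where a generated script either fully applies or not at all:

```python
import sqlite3

def apply_statements(conn, statements):
    """Apply a change script's statements atomically: commit all of
    them, or roll back everything if any statement fails."""
    try:
        with conn:  # one transaction; rolls back on exception
            for stmt in statements:
                conn.execute(stmt)
        return True
    except sqlite3.Error:
        return False
```

If the second statement of a script is broken, the first one never reaches the database, so a failed push leaves the server exactly as it was.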

I like to track "indexed" SQL scripts that if run in order will create a database and bring it up-to-date (structure and data). E.g.,

* ~/src/projectX/
`--* .git/
   `--* (...)
`--* db/
   `--* changelog/
      `--* 0000-CreateDatabase/
         `--* (...initial script(s) to create the database...)
      `--* 0001-SomeChange/
         `--* (...scripts to make a change to the original structure and data...)
      `--* 0002-SingleScriptToInvokeSomeChange.sql
      `--* (...)
   `--* fn/
      `--* (...current scripts to [maybe] drop and create each SQL function...)
   `--* sp/
      `--* (...likewise for stored procedures...)
   `--* tbl/
      `--* (...likewise for tables, except no dropping for safety...)
   `--* tgr/
      `--* (...likewise for triggers...)
   `--* vw/
      `--* (...likewise for views...)

It's easy enough to write a program that executes the scripts in db/changelog/ in order. If it also records its progress somewhere for each server, then it can automatically continue with only the new scripts.
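A bare-bones sketch of such a runner (Python; the function name and state file are made up) that finds the changelog entries not yet applied:

```python
import os

def pending_entries(changelog_dir, state_file):
    """List changelog entries (e.g. 0000-CreateDatabase, 0001-SomeChange)
    that sort after the last applied one, in execution order."""
    try:
        with open(state_file) as f:
            last_applied = f.read().strip()
    except FileNotFoundError:
        last_applied = ""  # fresh server: everything is pending
    entries = sorted(os.listdir(changelog_dir))
    return [e for e in entries if e > last_applied]

# After executing each entry against a server, overwrite that server's
# state_file with the entry's name so the next run resumes from there.
```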

When my scripts mess with dynamic data I am obviously very careful to preserve it and usually those are run by hand instead of automatically, rolled back initially, and the results verified prior to committing them.

bambams@test-chamber-1:~/src/projectX (master -> origin/master)$ push-db -c master --commit --force testing
testing e3b552 => 11dfa2
backup written to /var/backup/projectX/db/testing/20140725111105121.bak
log file is /var/log/build/projectX/db/testing/push-20140725111105121.log
transaction started
0042-AddWaterLevels/0000-level92-water-castle.sql
0042-AddWaterLevels/0001-level93-water-maze.sql
0042-AddWaterLevels/0002-level94-water-tower.sql
transaction committed

Edgar Reynaldo
Major Reynaldo
May 2007
avatar

Fishcake
Member #8,704
June 2007
avatar

Thank you for the feedback. :)

@bamccaig: Yep, this is for static data. We are actually already using a solution similar to yours for the schema changes (though we use timestamps instead of indexes, but that probably doesn't matter anyway). For the schema changes, we simply write the scripts by hand. But that's not possible for the data, because the people who will be making the changes aren't programmers, so scripting by hand is out of the question. That's why I thought of generating the SQL, because at least they know how to commit files into git. :P

@The Master: I was thinking of some sort of logging system that keeps track of the changes to the data, but that seems like a lot of work, and time is against me right now. :-/

@Edgar: Not sure about that. I haven't explored the "advanced" features of MySQL yet (I only know tables :P). I'll take a look at triggers.

Another idea I had is to just take a dump of the static data, track it in git, and from time to time generate a fresh dump from the development database, "diff" it (not sure how...) against the latest version in git, and then check in those changes.
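For the "diff" part, something like Python's difflib would probably do. This assumes the dump has one statement per line, which mysqldump's --skip-extended-insert option (one INSERT per row) gives you; file names here are made up:

```python
import difflib

def dump_diff(old_dump, new_dump):
    """Unified diff between the dump tracked in git and a fresh one."""
    return list(difflib.unified_diff(
        old_dump.splitlines(), new_dump.splitlines(),
        fromfile="levels.sql (git)", tofile="levels.sql (fresh)",
        lineterm=""))

old = "INSERT INTO `levels` VALUES (1,'castle');\nINSERT INTO `levels` VALUES (2,'maze');"
new = "INSERT INTO `levels` VALUES (1,'water castle');\nINSERT INTO `levels` VALUES (2,'maze');"
for line in dump_diff(old, new):
    print(line)
```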

MiquelFire
Member #3,110
January 2003
avatar

Dumping the static data and tracking it is the way to go, but you don't need to manually diff it. Just export the data, then import as needed. Though I do question why this data is in the database in the first place. Player data depends on IDs from it or something?

You can use JSON as a text format.

---
Febreze (and other air fresheners actually) is just below perfumes/colognes, and that's just below dead skunks in terms of smells that offend my nose.
MiquelFire.red
If anyone is of the opinion that there is no systemic racism in America, they're either blind, stupid, or racist too. ~Edgar Reynaldo

Edgar Reynaldo
Major Reynaldo
May 2007
avatar

Fishcake
Member #8,704
June 2007
avatar

Though I do question why this data is in the database in the first place. Player data depends on IDs from it or something?

Yup, the player data is linked to their IDs. And it also makes it easy to add static data progressively: just put the server into maintenance mode, add the new data to the database (via the admin page), bring the server back up, and the users will get the new content.

beoran
Member #12,636
March 2011

Triggers are the way to go, though for such advanced use of SQL I'd normally recommend PostgreSQL, since it supports the SQL standard and advanced uses like this better, and it also has a procedural language that makes triggers even more powerful.

bamccaig
Member #7,536
July 2006
avatar

IIRC, triggers are non-standard. :P Additionally, they are kind of a hack, and they can lead to action at a distance that can be confusing/misleading/etc. I don't think triggers are even useful here, because the OP wants to generate code files that can be tracked by a version control system (Git) and, at some later date, executed to impart the changes contained within on database clones at differing stages of development and release. While some SQL dialects can write to files, that is non-standard AFAIK and generally not very powerful or flexible. It also implies that any and every change to the database should be "pushed" like this, whereas it's entirely possible, especially in development, for a developer to begin work on something and revert it later. Using triggers for this sounds like a very bad idea. It is complicated, ugly, and completely unnecessary.

Matthew Leverton
Supreme Loser
January 1999
avatar

MySQL has binary logging which can be used for realtime replication and point-in-time restoration.

It also has plain text query logging.

Triggers sound like a bad choice here.

m c
Member #5,337
December 2004
avatar

The triggers could log all the required information into a new row in a log table.

Then you can run a query on that log table to generate the SQL command file.

Triggers were my first idea too, because that is how I've seen this done before.

Plus it is more database agnostic: just get the command data into the log table, and then generate the SQL code from the log table; no need to get more database-specific than that.
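As a runnable illustration of the idea (Python's sqlite3 here, since modern SQLite does have triggers; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE levels (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE change_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    op TEXT, row_id INTEGER, new_name TEXT
);
-- triggers record every INSERT and UPDATE on levels into the log table
CREATE TRIGGER levels_ins AFTER INSERT ON levels BEGIN
    INSERT INTO change_log (op, row_id, new_name)
    VALUES ('INSERT', NEW.id, NEW.name);
END;
CREATE TRIGGER levels_upd AFTER UPDATE ON levels BEGIN
    INSERT INTO change_log (op, row_id, new_name)
    VALUES ('UPDATE', NEW.id, NEW.name);
END;
""")

conn.execute("INSERT INTO levels (id, name) VALUES (1, 'water castle')")
conn.execute("UPDATE levels SET name = 'water maze' WHERE id = 1")

# generate portable SQL statements from the log table
statements = []
for op, row_id, name in conn.execute(
        "SELECT op, row_id, new_name FROM change_log ORDER BY id"):
    if op == 'INSERT':
        statements.append(
            "INSERT INTO levels (id, name) VALUES (%d, '%s');" % (row_id, name))
    else:
        statements.append(
            "UPDATE levels SET name = '%s' WHERE id = %d;" % (name, row_id))

print("\n".join(statements))
```

(A real generator would also need to escape quotes in the values, of course.)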

But if you don't already know triggers, then it might not be worth learning yet another thing. I only have experience with Microsoft T-SQL, Oracle, and Postgres PL/SQL triggers, so I don't know about MySQL or SQLite triggers; they never used to have them, but now they might.

I don't like MySQL; I like SQLite, and then Postgres or Oracle. But if you use Visual Studio, then check out SQL Server. They even have an embedded library version, I recently heard.

(\ /)
(O.o)
(> <)
