I suppose if you view bool as a one-bit arithmetic type then incrementing true is an overflow and that results in undefined behaviour - so the value could be anything.
But we have plenty of overflow scenarios with specified behaviour: unsigned overflow wraps, and float overflow produces infinity under IEEE 754 (signed integer overflow, admittedly, is undefined). It seems strange they would just leave this one alone.
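For example (a sketch of my own, not from the thread), the defined cases behave like this, while signed overflow has no defined demo at all:

    #include <cstdio>
    #include <climits>
    #include <limits>

    int main() {
        unsigned u = UINT_MAX;
        ++u;                             // defined: unsigned arithmetic wraps to 0
        double d = std::numeric_limits<double>::max();
        d *= 2.0;                        // IEEE 754: overflows to +inf
        std::printf("%u %f\n", u, d);    // prints "0 inf"
    }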
My opinion - and it's about as valuable as that - is that the more things that "don't immediately make sense" a language contains, the more a programmer has to keep in his mind at all times while programming. And the more balls a programmer has to juggle at once, the more likely he is to create an error.
I'm sure there's some brain function / sign of intelligence that corresponds to how many ideas a person can hold in their head at one time. And the more of those that are taken up by the programming language, the less is available for comprehending the task at hand. Which is why many people solve complex problems outside of code: by solving the problem separately from implementing it, you free up some "slots" in your brain, helping a "too big to fit" problem hopefully fit.
The fact that it's post-increment gives it one possible use, though it's not hard to find alternatives:
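Presumably something like the classic run-once latch - a sketch of my own, not from the original post, and one that only compiles before C++17 (operator++ on bool was deprecated in C++98 and removed in C++17):

    #include <iostream>

    // Pre-C++17 only: post-incrementing a bool yields the old value
    // and then sets the flag to true, giving a one-line "first time" test.
    void log_once(const char* msg) {
        static bool seen = false;
        if (!seen++)              // true only on the first call
            std::cout << msg;
    }

    // The alternative that works in every standard:
    void log_once_portable(const char* msg) {
        static bool seen = false;
        if (!seen) {
            seen = true;
            std::cout << msg;
        }
    }

    int main() {
        for (int i = 0; i < 3; ++i)
            log_once("printed exactly once\n");  // prints one line, not three
    }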
Actually, have you tested for an error in that code? What happens if you increment a bool 65536 times? Does it eventually wrap an 8/16/32/64-bit integer around to zero?
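For what it's worth, the check is easy to write (pre-C++17, where ++ on a bool still compiles; the old standard wording says the operand is simply set to true, so it never wraps):

    #include <cstdio>

    int main() {
        bool b = false;
        for (int i = 0; i < 65536; ++i)
            ++b;                 // defined (pre-C++17) to set b to true, not to add 1
        std::printf("%d\n", b);  // prints 1: the value saturates at true
    }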