Don't think about wstring, think about Unicode. Unicode has a large set of "code points": imagine every character in every language (a LOT of them!). You cannot fit all of these into the de facto unit of character storage, the byte (range 0-255). Back when computers were simpler there was only ASCII (range 0-127).
If you want to store a large set of "codes" you can do it in different ways. Ignore Unicode for a minute and think of all the ways you could do it. E.g. ASCII only uses 7 bits, so the spare top bit can signal that the code spills over into the next byte, i.e. a multi-byte encoding (this is essentially what UTF-8 does). Or you might decide not to use single bytes at all, but units of 2 or 4 bytes, and chain these together. You might make this decision based on the architecture of the processors you are targeting, or how many languages you want to support.
std::string can hold UTF-8 encoded text (i.e. multi-byte sequences of 8-bit chars). This works because std::string stores its length rather than relying on a null terminator, so it is 8-bit clean: you can store '\0', or any byte value, in a string. That makes std::string backwards compatible with null terminated 8-bit strings and ASCII. std::wstring works like std::string but holds "wide" characters (i.e. wider than 8 bits) and is not backwards compatible with 8-bit strings.
std::string and 8-bit ASCII strings are "narrow"; std::wstring is "wide". Note that "wide" does not pin down the size of a character: it is platform/compiler specific.
So your decision is: how to support Unicode, given the above information.
You might look at the APIs you are going to use. E.g. if you are only using Allegro, you might use std::string and UTF-8, because that is what Allegro uses internally. Otherwise you would have to convert any wide strings to UTF-8 (narrow) before Allegro can use them.
If you are using a library that only supports wide strings then you might use wide strings exclusively. Some APIs support both, switched with a define (e.g. the Windows headers switch between narrow and wide versions based on the UNICODE define).
If you are writing a library for public release you might want to bear all this in mind: some people will want narrow chars, others wide. Most libraries tend to assume ASCII, or a narrow encoding. If you are targeting Windows, then I think you really have to support wide encoding, because Windows only really supports localisation properly through its wide (UTF-16) APIs. All of the newer .NET stuff uses UTF-16 internally too. It's a PITA!
Soooo... if your question is related to Allegro, I'd say use narrow encoding. You get the simplicity of ASCII strings, and you can use UTF-8 to localise, which is what Allegro uses anyway. If you use wide strings you'll just have to convert them to narrow ones every time you call an Allegro text rendering function.