From time to time I see people using u8, s8, u16, etc. instead of the usual unsigned char and so on.
What exactly is u8? Is it just u8 defined as unsigned char?
Will using u32 instead of unsigned int make sure that it stays the same size with every compiler on every system?
Are there any downsides to using this system instead of the usual?
Thanks.
There should be some typedefs for integral types of fixed size in cstdint (although probably not all compilers have this).
You can be quite sure that u8 is typedeffed as unsigned char (since the size of char is guaranteed to be 1). However, there are no guarantees for the other types. So if you typedef unsigned int as u32, there is no guarantee that it will actually be 4 bytes with each compiler. (If I'm not mistaken, each compiler implementation provides suitable typedefs in the cstdint header, so that you won't have to worry about the size of types with this particular compiler.)
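For what it's worth, here is a minimal sketch of what using those typedefs looks like, assuming your compiler ships <stdint.h> (or <cstdint>); the variable names are just made up:

#include <stdint.h>   // or <cstdint> on compilers that already provide it
#include <iostream>

int main()
{
    uint8_t  flags  = 0;   // exactly 8 bits, unsigned (only if the platform has such a type)
    int32_t  offset = 0;   // exactly 32 bits, signed

    // The "least" variants are always present and promise only a minimum width.
    uint_least32_t counter = 0;

    std::cout << sizeof(flags) << ' ' << sizeof(offset) << ' ' << sizeof(counter) << '\n';
    return 0;
}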
On the other hand, I've never been tempted to use fixed-sized integral types anyway.
I see, I guess I just have to make sure no complications will arise on different systems then.
Thanks.
EDIT:
I thought I would be smart and made my own definitions to make the code more readable:
#define ubyte unsigned char
#define byte signed char
#define uint unsigned int
#define int signed int
But as you can see, this causes some problems: uint ends up being both signed and unsigned. My question is, can I get around this problem, and if so, how?
The order of the definitions does not matter, apparently.
I thought I would be smart and made my own definitions to make the code more readable
I think it becomes less readable, it's best to just use the normal C++ types. And in the (rare) cases where you need fixed bit sizes, use cstdint (stdint.h in C99), no need to #define your own.
signed char *value_name;
signed int *value_subtopic;
signed char **subtopic;
signed int *integer_values;
double *double_values;
I think this is very irritating to read, but it's personal taste of course.
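For comparison, a rough sketch of the typedef route, using the same alias names purely as an illustration (they are not standard names):

typedef unsigned char ubyte;
typedef signed char   byte;
typedef unsigned int  uint;

// Deliberately no alias for plain int: it is already signed, and a keyword
// can't be reused as a typedef name (nor safely redefined with a #define).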
I'll have to do it this way I guess :/.
A plain 'int' is always signed. Never write 'signed int', it's just a waste of space.
The only reason ever to use the 'signed' keyword is with chars, since they can be either signed or unsigned by default. If you're using chars for small integers, as opposed to for characters, it's common to use 'unsigned char' or 'signed char'.
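For example, something along these lines (a tiny sketch, the variable names are made up):

#include <iostream>

int main()
{
    // Plain char may be signed or unsigned depending on the compiler, so
    // spell out the signedness when a char is really a small integer.
    signed char   temperature = -40;   // needs negative values
    unsigned char intensity   = 200;   // needs the full 0..255 range

    // Cast to int when printing so the values aren't interpreted as characters.
    std::cout << static_cast<int>(temperature) << ' '
              << static_cast<int>(intensity) << '\n';
    return 0;
}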
How sure are you about "always"?
Thanks.
EDIT: I mean, if int is always signed then why can I write signed int?
Yes, I am sure.
I don't know why they chose to add signed as a keyword with only a single use case, but that's what it is. Seems a bit silly in hindsight. Probably 'historical reasons'. Or they felt it made such a nice pair together with unsigned...
Ok, thanks, from now on I'll program as if that's true.
Weird.
You can be quite sure that u8 is typedeffed as unsigned char (since the size of char is guaranteed to be 1).
Bzzzzt!
sizeof measures size in units of a char. That doesn't say a char has to be 8 bits.
Bzzzzt!
sizeof measures size in units of a char. That doesn't say a char has to be 8 bits.
Really? Interesting.
Some people prefer typedefs for integral types (use a typedef, not a #define). I think it's easier to read and deal with, personally. Most of my projects will have a types header that goes along the lines of
#include <boost/cstdint.hpp>

typedef int SInt;
typedef unsigned int UInt;
typedef boost::int_least8_t SInt8;
typedef boost::uint_least8_t UInt8;
typedef boost::int_least16_t SInt16;
typedef boost::uint_least16_t UInt16;
typedef boost::int_least32_t SInt32;
typedef boost::uint_least32_t UInt32;
typedef boost::int_least64_t SInt64;
typedef boost::uint_least64_t UInt64;

#define nullptr 0
Really? Interesting.
Indeed. Essentially all modern PC platforms will have an 8 bit byte/char size, but there are other platforms that C/C++ can be used on where the size differs.
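If you're curious, CHAR_BIT from <climits> tells you what the current platform uses; a quick sketch:

#include <climits>
#include <iostream>

int main()
{
    // CHAR_BIT is at least 8, but the standard allows it to be larger.
    std::cout << "bits per char: " << CHAR_BIT << '\n';
    std::cout << "sizeof(int) in chars: " << sizeof(int) << '\n';
    std::cout << "bits per int: " << sizeof(int) * CHAR_BIT << '\n';
    return 0;
}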
Most of my projects will have a types header that goes along the lines of
Only ones that think it's a good idea to produce harder-to-read code and duplicate types for no reason whatsoever.
Ah, so that's what typedef is used for...
Thanks.
Only ones that think it's a good idea to produce harder-to-read code and duplicate types for no reason whatsoever.
You do know that expressing your opinions in a way that makes you look like an arrogant ass is the best way to make people ignore you... right?
You do know that expressing your opinions in a way that makes you look like an arrogant ass is the best way to make people ignore you... right?
He does have a point, you do realise that, right?
There are standard datatypes. Use them.
He does have a point, you do realise that, right?
There are standard datatypes. Use them.
Then you shouldn't have a problem telling me which integer type to use that will be at least 32 bits wide on every platform.
(u)int32_t is available on every C99 compiler, and some C89 compilers that like to include bits of later standards.
(u)int32_t is available on every C99 compiler, and some C89 compilers that like to include bits of later standards.
C99 != C++
C99 != C++
It might still work though.
In either case, you can provide it yourself by picking a standard name rather than making up your own.
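For instance, purely a sketch of that fallback on a C++98 compiler without <stdint.h>; the underlying types assume a typical 32-bit desktop target and would need adjusting elsewhere:

// Only if the compiler does not already provide these names.
typedef signed char    int8_t;
typedef unsigned char  uint8_t;
typedef short          int16_t;
typedef unsigned short uint16_t;
typedef int            int32_t;    // assumes int is 32 bits on this target
typedef unsigned int   uint32_t;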
Then you shouldn't have a problem telling me which integer type to use that will be at least 32 bits wide on every platform.
long is guaranteed to be at least 32 bits.
I don't think long or any other C++ datatypes have a guaranteed bit size - as Evert pointed out, the standard does not even require a char to be 8 bits. What you want is int_least32_t.
Wrong, the integer types have guaranteed minimum sizes. Look it up.
I did, but I only read the "simple types" or whatever it is called section of the standard. I assumed it would mention bits there if it does at all...
[edit:] "Fundamental Types" it was. And no, the only occurrence of the number "32" in the C++ standard is to tell that std::atexit() must support registering at least 32 functions
I only have a draft though, so maybe it changed in the final version?
The C++ standard doesn't require minimum sizes, but it does require minimum ranges - see this page. When applied to a 2's complement machine (like pretty much every single machine currently in use), those ranges translate to the following minimum type sizes:
char: 8 bits
short: 16 bits
int: 16 bits
long: 32 bits
Also, it is a recommendation (but not a requirement) that a char corresponds to the smallest unit a machine can address, and that an int is the 'native' size of the platform. As the article states, an implementation meets the standard by making all 4 integer types 32 bits wide.
The number 32 doesn't appear because the standard requires a range of at least -32767 ... 32767.
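Roughly, you can print the actual ranges your platform gives you with <climits>:

#include <climits>
#include <iostream>

int main()
{
    // The guarantees are ranges: INT_MAX must be at least 32767,
    // LONG_MAX at least 2147483647.
    std::cout << "int:  " << INT_MIN  << " .. " << INT_MAX  << '\n';
    std::cout << "long: " << LONG_MIN << " .. " << LONG_MAX << '\n';
    return 0;
}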
Minimum sizes are generally fine, unless you do silly things like brute-force pointer casts, e.g.:
int i = 12345;
char* c = (char*)(&i);
for (int j = 0; j < 4; ++j)
    c[j] = 1;
The above obviously breaks when sizeof(char) * 4 > sizeof(int).
Wrong, the integer types have guaranteed minimum sizes. Look it up.
Yes, they do. A "char" is size 1 by definition, a short is at least as large as a char, an int is at least as large as a short and should correspond to the machine's native word size, and a long is at least the size of an int.
They could all be 8 bits or 32 bits as far as the standard is concerned (and historically have been on different architectures).
Yes, that's how I read it, they could all be 8-bit (of course, in practice, about as many compilers will do that as use a 9-bit char). But what it means is, if you need a minimum number of bits, it's best to use something like int_least32_t. Usually you don't, and then int is fine.
[Edit:] We are talking about C++ here btw, not C99 which is different (and which that home.att.net site linked above seems to talk about).
Just for fun, here's a quote from "The C++ Programming Language, special edition", §4.6 for you:
In addition, it is guaranteed that a char has at least 8 bits, a short at least 16 bits, and a long at least 32 bits.
It might not be the standard, but Stroustrup tends to get his facts right. He also probably has a copy of the standard on his shelf, which I don't.
The minimum 16-bit shorts and ints, and 32-bit longs, are mentioned in the K&R 1989 edition. I didn't bother trying to find a note about char sizes.
EDIT: Of course, if you're programming an 8-bit CPU, the C compiler might not follow the standard with regard to minimum sizes. I wouldn't be surprised if C compilers that use 8-bit ints exist. And maybe there's no floating point support at all, etc.
Well, if that book was published before 1998, the author definitely did not have a copy of the standard
Good point, the edition I've got is from 1997. Although he was actually on the committee, and probably had a bunch of drafts lying about. Besides, I have the 2002 printing.
Well, I figured out the problem - the C++ standard (neither the one from 1998, nor the updated one from 2003, nor the upcoming C++0x) does not specify any minimum sizes within the text of the standard. But all three of the above reference the C90 standard (C++0x may or may not reference C99 instead, from what I understand) and so they inherit minimum sizes from there.
So what you said is right - int is at least 16 bits and long at least 32 bits. However, you cannot look that up in the C++ standard, only in the C standard referenced by it.