Allegro.cc - Online Community


This thread is locked; no one can reply to it.
unsigned char or u8?
Albin Engström
Member #8,110
December 2006

From time to time I see people using u8 s8 u16 etc instead of the usual unsigned char etc.

What exactly is u8? Is it just defined as unsigned char?

Will using u32 instead of unsigned int make sure that the compiler on every system will leave it untouched?

Are there any downsides to using this system instead of the usual?

Thanks :).

anonymous
Member #8025
November 2006

There should be some typedefs for integral types of fixed size in cstdint (although probably not all compilers have this).

You can be quite sure that u8 is typedef'd as unsigned char (since the size of char is guaranteed to be 1). However, there are no guarantees for the other types. So if you typedef unsigned int as u32, there is no guarantee that it will actually be 4 bytes with each compiler. (If I'm not mistaken, each compiler implementation provides suitable typedefs in the cstdint header, so that you won't have to worry about the size of types with that particular compiler.)

On the other hand, I've never been tempted to use fixed-sized integral types anyway.

Albin Engström
Member #8,110
December 2006

I see, I guess I just have to make sure no complications will arise on different systems then.

Thanks :).

EDIT:
I thought I would be smart and made my own definitions to make the code more readable:

#define ubyte unsigned char
#define byte signed char

#define uint unsigned int
#define int signed int

But as you can see, this causes some problems, as uint becomes both signed and unsigned. My question is: can I bypass this problem, and if so, how?

The order of the definitions does not matter, apparently.

Elias
Member #358
May 2000

Quote:

I thought I would be smart and made my own definitions to make the code more readable

I think it becomes less readable; it's best to just use the normal C++ types. And in the (rare) cases where you need fixed bit sizes, use cstdint (stdint.h in C99); no need to #define your own.

--
"Either help out or stop whining" - Evert

Albin Engström
Member #8,110
December 2006

    signed char *value_name;
    signed int *value_subtopic;
    signed char **subtopic;

    signed int *integer_values;
    double *double_values;

I think this is very irritating to read, but it's personal taste of course.

I'll have to do it this way I guess :/.

torhu
Member #2,727
September 2002

A plain 'int' is always signed. Never write 'signed int', it's just a waste of space.

The only reason ever to use the 'signed' keyword is with chars, since they can be either signed or unsigned by default. If you're using chars for small integers, as opposed to for characters, it's common to use 'unsigned char' or 'signed char'.

Albin Engström
Member #8,110
December 2006

How sure are you about "always"?

Thanks.

EDIT: I mean, if int is always signed then why can I write signed int?

torhu
Member #2,727
September 2002

Yes, I am sure.

I don't know why they chose to add signed as a keyword with only a single use case, but that's what it is. Seems a bit silly in hindsight. Probably 'historical reasons'. Or they felt it made such a nice pair together with unsigned... ::)

Albin Engström
Member #8,110
December 2006

Ok, thanks, from now on I'll program as if that's true.

Weird :P.

Evert
Member #794
November 2000

Quote:

You can be quite sure that u8 is typedeffed as unsigned char (since the size of char is guaranteed to be 1).

Bzzzzt!
sizeof measures size in units of a char. That doesn't say a char has to be 8 bits.

Albin Engström
Member #8,110
December 2006

Evert said:

Bzzzzt!
sizeof measures size in units of a char. That doesn't say a char has to be 8 bits.

Really? Interesting.

Speedo
Member #9,783
May 2008

Some people prefer typedefs for integral types (use a typedef, not a #define). I think it's easier to read and deal with, personally. Most of my projects will have a types header that goes along the lines of

#include <boost/cstdint.hpp>

typedef int                     SInt;
typedef unsigned int            UInt;
typedef boost::int_least8_t     SInt8;
typedef boost::uint_least8_t    UInt8;
typedef boost::int_least16_t    SInt16;
typedef boost::uint_least16_t   UInt16;
typedef boost::int_least32_t    SInt32;
typedef boost::uint_least32_t   UInt32;
typedef boost::int_least64_t    SInt64;
typedef boost::uint_least64_t   UInt64;

#define nullptr 0

Quote:

Really? Interesting.

Indeed. Essentially all modern PC platforms have an 8-bit byte/char, but there are other platforms that C/C++ can be used on where the size differs.

Thomas Fjellstrom
Member #476
June 2000

Quote:

Most of my projects will have a types header that goes along the lines of

Only ones that think it's a good idea to produce harder-to-read code, and duplicate types for no reason whatsoever.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

Albin Engström
Member #8,110
December 2006

Ah, so that's what typedef is used for.. :P.

Thanks.

Speedo
Member #9,783
May 2008

Quote:

Only ones that think its a good idea to produce harder to read code, and duplicate types for no reason what so ever.

You do know that expressing your opinions in a way that makes you look like an arrogant ass is the best way to make people ignore you... right? ::)

Evert
Member #794
November 2000

Quote:

You do know that expressing your opinions in a way that make you look like an arrogant ass is the best way to make people ignore you... right? ::)

He does have a point, you do realise that, right?
There are standard datatypes. Use them.

Speedo
Member #9,783
May 2008

Quote:

He does have a point, you do realise that, right?
There are standard datatypes. Use them.

Then you shouldn't have a problem telling me which integer type to use that will be at least 32 bits wide on every platform.

Thomas Fjellstrom
Member #476
June 2000

(u)int32_t is available on every C99 compiler, and some C89 compilers that like to include bits of later standards.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

Speedo
Member #9,783
May 2008

Quote:

(u)int32_t is available on every C99 compiler, and some C89 compilers that like to include bits of later standards.

C99 != C++

Evert
Member #794
November 2000

Quote:

C99 != C++

It might still work though.
In either case, you can provide it yourself by picking a standard name rather than making up your own.

torhu
Member #2,727
September 2002

Quote:

Then you shouldn't have a problem telling me which integer type to use that will be at least 32 bits wide on every platform.

long is guaranteed to be at least 32 bits.

Elias
Member #358
May 2000

I don't think long or any other C++ datatypes have a guaranteed bit size - as Evert pointed out, the standard does not even require a char to be 8 bits. What you want is int_least32_t.

--
"Either help out or stop whining" - Evert

torhu
Member #2,727
September 2002

Wrong, the integer types have guaranteed minimum sizes. Look it up.

Elias
Member #358
May 2000

I did, but I only read the "simple types" or whatever it's called section of the standard. I assumed it would mention bits there if it does at all...

[edit:] "Fundamental Types" it was. And no, the only occurrence of the number "32" in the C++ standard is to say that std::atexit() must support registering at least 32 functions.

I only have a draft though, so maybe it changed in the final version?

--
"Either help out or stop whining" - Evert

Tobias Dammers
Member #2,604
August 2002

The C++ standard doesn't require minimum sizes, but it does require minimum ranges - see this page. When applied to a 2's complement machine (like pretty much every single machine currently in use), those ranges translate to the following minimum type sizes:
char: 8 bits
short: 16
int: 16
long: 32
Also, it is a recommendation (but not a requirement) that a char corresponds to the smallest unit a machine can address, and that an int is the 'native' size of the platform. As the article states, an implementation meets the standard by making all 4 integer types 32 bits wide.
The number 32 doesn't appear because the standard specifies value ranges (at least -32767 ... 32767 for int, for example) rather than bit widths.
Minimum sizes are generally fine, unless you do silly things like brute-force pointer casts, e.g.:

int i = 12345;
char* c = (char*)(&i);
for (int j = 0; j < 4; ++j)
  c[j] = 1;

The above obviously breaks when sizeof(int) < 4, i.e. when an int is narrower than the four chars being written.

---
Me make music: Triofobie
---
"We need Tobias and his awesome trombone, too." - Johan Halmén
