Allegro.cc - Online Community

Allegro.cc Forums » Programming Questions » Convert degree to Radians effective way

Convert degree to Radians effective way
Thinkal VB
Member #16,859
May 2018

I was rotating a bitmap around a point, and when it completed a full rotation it had a flickering effect. So I logged the values of the radians and degrees.
Log at the end of the rotation:
[Info ]: 358 rad= 5.96667
[Info ]: 359 rad= 5.98333
[Info ]: 0 rad= 0
[Info ]: 1 rad= 0.0166667

But when I converted 5.98333 radians back to degrees in Google, I got 342.8195564.
This is the function I am using:
const float PI = 104348 / 33215;

float RenderingEngine::_toRadians(const float& rotDegree)
{
    return (rotDegree * PI) / 180.0f;
}

Please help. The project is also on GitHub:
https://github.com/ThinkalVB/NearEarth

Audric
Member #907
January 2001

104348 is an integer and 33215 is an integer, so / is integer division, which results in the integer 3. Not good enough :P

Try to use the standard library's M_PI if it's defined, and if you REALLY have to define your own, use an explicit literal with a number of digits that matches the float type. Both methods are better than 104348.0 / 33215.0 (which is what you meant to write).

Edgar Reynaldo
Major Reynaldo
May 2007

dthompson
Member #5,749
April 2005

Audric's got the right idea. Use the standard library (or ALLEGRO_PI), but regardless, it's good practice to write any expression involving real numbers (not just integers) with .-suffixed literals:

double one_third = 1. / 3.;

EDIT: replaced AL_PI above; forgot the latter is A4. What it is to be an old-timer...

______________________________________________________
Website. It was freakdesign.bafsoft.net.
This isn't a game!

Edgar Reynaldo
Major Reynaldo
May 2007

Neil Roy
Member #2,229
April 2002

I always calculate 180/M_PI or M_PI/180 ahead of time and store the result in a floating-point variable (unless you're doing NASA simulations, you don't really need double). I imagine the compiler would optimize that anyway, but I like to be sure, seeing as how that part never changes. Otherwise I do it how Edgar does. It's pretty simple.

I'm not sure why anyone is trying to calculate PI when it is already available, that's like re-inventing the wheel.

---
“I love you too.” - last words of Wanda Roy

Peter Hull
Member #1,136
March 2001

You can use ALLEGRO_PI.

Neil Roy
Member #2,229
April 2002

And Peter Hull wins! :)

---
“I love you too.” - last words of Wanda Roy

Thinkal VB
Member #16,859
May 2018

Thank you Audric, Edgar Reynaldo, dthompson, Neil Roy, Peter Hull.
Thank you all - I knew I was missing something; I was stupid enough to miss that. Ahhhhhh..........
I thought my compiler was going nuts ;D;D;D
I used ALLEGRO_PI; it's better now.

Kitty Cat
Member #2,815
October 2002

Neil Roy said:

I always calculate the 180/M_PI or M_PI/180 ahead of time and store the result in a floating point variable (unless you're doing NASA simulations, you don't really need double), though I imagine the compiler will optimize that

Depends on how it's written. If you do val * 180.0f / M_PI, the compiler can't optimize it, because floating-point math isn't associative in C. You'll get (slightly) different results depending on whether the multiplication or the division is done first, so you need to order it explicitly: val * (180.0f / M_PI). Then the compiler can pre-calculate the division. It's the same as with integers: val / 100 * 100 can't be transformed into val * 100 / 100 and thus optimized to val * 1 (or just val), because the order alters the result.

Additionally, M_PI is a double, so it'll promote any math using it to double unless it's explicitly cast or assigned to a float variable.

--
"Do not meddle in the affairs of cats, for they are subtle and will pee on your computer." -- Bruce Graham

Peter Hull
Member #1,136
March 2001

Interesting. I had a look on Godbolt: https://godbolt.org/g/WFtbpR
Just to add to what Kitty said: if you have ang * M_PI / 180, it compiles to two floating-point operations (mul then div) unless you bracket them, i.e. ang * (M_PI / 180), or use -ffast-math.

So I suppose Neil's pre-computed constants are 100% faster than the naive approach :o Well done, sir.

Audric
Member #907
January 2001

Kitty Cat said:

Additionally, M_PI is a double

I didn't know that. I looked it up, and it seems that all decimal literals are indeed considered double, so just performing + 0.0 or * 1.0 is enough to promote the computation to double.
The only exceptions are when using literal suffixes (#define F_PI 3.1415f) or compiler options (GCC's -fsingle-precision-constant). Or, I guess, on CPUs that don't support double arithmetic.

So I suppose Neil's pre-computed constants are 100% faster than the naive approach

Since the 90s, counting instructions no longer tells you how much time code will take.
The instruction pipeline can reduce this +100% to +10%, and the difference can even be zero if speculative execution can perform the whole computation in the unused "slots" of the pipeline while it's waiting for an "if" condition to be resolved.
Math plays very well with the pipeline, as it has no side effects.
It's more important to avoid expensive conversions (integer <-> floating point) than to try to reduce the number of individual math operations.

Peter Hull
Member #1,136
March 2001

Audric said:

Since the 90s, counting instruction no longer tells how much time it will take.

Well, I was only kidding, but... can pipelining help if the input of the second op-code is the result of the first? Besides which, for some of us, the 1990s are still 'recently invented' ;D

Audric
Member #907
January 2001

I assumed so, but it annoyed me that your jest could be taken at face value - a lot of coders are never taught what modern computers actually do.

can pipelining help if the input of the second op-code is the result of the first?

I am really not a specialist, but I think so. When you have a computation A + B + C, you don't have to wait for the first addition to complete before you fetch the value of C from memory.
Everything I've read about optimizing indicates that the people who design CPUs (and compilers) are smarter than us.

dthompson
Member #5,749
April 2005

Audric said:

Everything I've read about optimizing indicates that the people who design CPUs (and compilers) are smarter than us.

When I started out programming, I can remember thinking that in order to thoroughly optimise my code, I'd have to pre-compute all of my constants or they'd be continually re-evaluated at runtime. So I'd write stuff like:

#define SCREEN_WIDTH  320
#define SCREEN_HEIGHT 200
#define PLAYER_WIDTH  15
#define PLAYER_HEIGHT 20

#define PLAYER_MAX_X  304
#define PLAYER_MAX_Y  179

...rather than defining the last two in a more readable way, that even a 90s compiler wouldn't hesitate to evaluate at compile-time:

#define PLAYER_MAX_X  (SCREEN_WIDTH  - PLAYER_WIDTH  - 1)
#define PLAYER_MAX_Y  (SCREEN_HEIGHT - PLAYER_HEIGHT - 1)

______________________________________________________
Website. It was freakdesign.bafsoft.net.
This isn't a game!

Edgar Reynaldo
Major Reynaldo
May 2007

#include <cmath> // for M_PI

const double RAD_TO_DEG = 180.0 / M_PI;
const double DEG_TO_RAD = M_PI / 180.0;

double circle_radians = 2.0 * M_PI;
double circle_degrees = circle_radians * RAD_TO_DEG;

That's better.

dthompson said:

When I started out programming, I can remember thinking that in order to thoroughly optimise my code, I'd have to pre-compute all of my constants or they'd be continually re-evaluated at runtime. So I'd write stuff like:

It depends how you declare them. The compiler can't always optimize things away unless you mark them as const, but even so it generally does a good job of it.

Neil Roy
Member #2,229
April 2002

Audric said:

and the difference can even be zero if speculative execution can perform the whole computation

Intel CPUs Affected By Yet Another Speculative Execution Flaw

I wouldn't count on that working. ;)

In one of my projects (my City3D) I use...

#define RAD2DEG 57.295780f          // Multiply by this to convert radians to degrees

...I just straight up did the math and inserted the result of 180/PI in there. No guessing as to whether it will do the math with that! ;)

---
“I love you too.” - last words of Wanda Roy

relpatseht
Member #5,034
September 2004

https://jsperf.com/diamond-angle-vs-atan2/2

Generally faster. More numerically stable.

dthompson
Member #5,749
April 2005

It depends how you declare them.

const wasn't a thing back when I started C. Is it now generally preferred over #defined constants?

EDIT: StackOverflow, amirite? Most of the answers I've seen say that they're pretty useful (though even more so in C++ than C). Even enums are suggested in this answer, yet this one makes an interesting case against const.

______________________________________________________
Website. It was freakdesign.bafsoft.net.
This isn't a game!

Audric
Member #907
January 2001

I think const fields are probably going to be favored in C++, where there are classes and namespaces, so you have the means to "store" a constant in the most logical (and thus user-friendly) context:
if (sound::Volume < sound::MaxVolume) sound::IncreaseVolume();

In C you won't have these facilities, so constant values will be in the global namespace. I rarely see numeric consts in C codebases; when I do, it's generally because the constant is actually stored in the module, and thus it can change by just recompiling this module and re-linking with the others:

// items.h
extern T_ITEM items[]; // This array has items_count elements
extern const int items_count;

// items.c
T_ITEM items[] = {
    {"sword", NULL},
    {"axe",   NULL},
};
// automatic count
const int items_count = sizeof(items) / sizeof(items[0]);

You can add and remove items in the array, items_count gets updated automatically, and you don't have to recompile every file that #include "items.h"

dthompson
Member #5,749
April 2005

I've found that the (implicit) scoping with #defines by compilation unit - ie. putting 'private' constants in .c files, while 'public' constants go in .h files - works fine in most cases. Though it's obviously not as readable as C++'s proper namespacing.

______________________________________________________
Website. It was freakdesign.bafsoft.net.
This isn't a game!

Kitty Cat
Member #2,815
October 2002

C++ has constexpr, which forces a variable to be treated as a constant expression. So if you have constexpr float rad2deg = 180.0/M_PI;, then rad2deg is a constant expression equal to (float)(180.0/M_PI).

C unfortunately doesn't have this. Generally, a static const will work okay: with optimizations enabled, the compiler knows the variable can't be modified, can see its value at compile time, and will substitute uses of the variable with the value directly, except in cases where the language requires a true constant expression. If you need a true constant expression, or want to share one between sources in a header, a macro will typically be better.

--
"Do not meddle in the affairs of cats, for they are subtle and will pee on your computer." -- Bruce Graham

Chris Katko
Member #1,881
January 2002

D also constant'ifies literally anything it can, and allows (nested!) function calls to be included as long as they can be computed at compile-time. Newest C++ finally adds what D had for a decade.

But this is an incredible amount of discussion on something so meaningless. :P One extra multiply will not affect anything unless it's actually tens of thousands of them a second in a super tight loop, like a particle system with tens of thousands of elements (or preferably, way more). The second you make a virtual function call or allocate a piece of memory, you've blown away any effect a multiply would have.

Moreover, CPUs regularly shuffle the order of code around, and cache branch results, to the point that you can't even guarantee any particular line of code will improve things without actual empirical testing.

So yeah, it's a good thing to know that specifying constants helps stupid compilers realize they're stupid (or is the flaw more in the lack of expressiveness in the language? /digression), and it's a good "trick" to have in your "bag of tricks" when you're doing an optimization pass. The actual practical usefulness of this is pretty low, though.

At the end of the day, use whatever code that makes you the MOST EFFICIENT at producing working, maintainable code that gets out the door. Because 100% of the optimized code that never ships... doesn't matter.

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

bamccaig
Member #7,536
July 2006

While I agree in general with the conclusion of Chris's post, I have to say that if you repeat yourself then you're neglecting the DRY principle. There's no justifiable reason to repeat yourself. I see it constantly in real-world code by colleagues and it makes me sick. Business software is often OO just because, and you will routinely see blocks of code that dereference the same nested properties/objects over and over again. Maybe a sophisticated compiler will see that and optimize it, but honestly it shouldn't have to. It's far more readable if the human stores a pointer or reference along the way for a nested object/struct that is accessed more than once. And instead of repeatedly instructing the computer to follow pointer after pointer, you can go straight to the pointer you're interested in, for a negligible stack allocation. This kind of copy/pasta repeated code drives me insane.

Ultimately, algorithms are king. And where performance is key, you need to be smarter than I am about data structures. Worrying about it at the CPU level is kind of missing the point unless you know which CPUs are going to be used. But you can try to optimize for the systems you are targeting if it matters. Otherwise: DRY, offload what you can to the compiler, and save yourself the effort when you can afford to. A skilled "engineer's" time is worth way more than machine time. Depending on the problem, it's probably more economical to just throw more hardware at it than to hire a wizard to make the most of the hardware you have. And if you are legitimately part of a team where it's worth spending the human time to make it fast, then you probably won't need any advice from here.

I applaud Neil for defining a constant instead of repeating himself. And I facepalm for everybody who didn't think of it, because it's the simplest thing you can possibly do in a program; what the Hell else are you doing wrong if you don't?

Audric
Member #907
January 2001

bamccaig said:

It's far more readable if the human stores a pointer or reference along the way for a nested object/struct that is accessed more than once.

I do agree in cases where it makes the code shorter:

format(currentSite.Domain.DomainGroup.Category.Order, 
       currentSite.Domain.DomainGroup.Category.Name, 
       currentSite.Domain.DomainGroup.Category.Type);
//
category = currentSite.Domain.DomainGroup.Category;
format(category.Order, category.Name, category.Type);

VB.NET even has a nice language construct for this:

With currentSite.Domain.DomainGroup.Category
   format(.Order, .Name, .Type)
End With

But as a strict rule ?

// I really don't think the following:
format(currentSite.Domain.Type, currentSite.Domain.Name, nextSite.Domain.Type, nextSite.Domain.Name);
// Should be rewritten as
Domain domainOfTheCurrentSite = currentSite.Domain;
Domain domainOfTheNextSite = nextSite.Domain;
format(domainOfTheCurrentSite.Type, domainOfTheCurrentSite.Name, domainOfTheNextSite.Type, domainOfTheNextSite.Name);
