This thread is locked; no one can reply to it.

Why I write games in C
princeofspace
Member #11,874
April 2010
avatar

This may be controversial, but this is something I've turned over in my brain for a while.

I have no great aversion to C++, and if someone is willing to pay me to write code with, say, C++14, that is fine. I've considered switching to C++ as my primary tool lately because of Microsoft's unwillingness to support a proper C compiler.

However, there are a few significant reasons why I'm struggling to move over to the other side -- primarily, that C makes me a better programmer.

Take, for example, mixing code with variable declarations. If you find yourself putting variable declarations deep inside a nested block, it should serve as a warning: it's time to consider breaking a complicated function into simpler ones. It's cleaner organization to put the data first, followed by the operations on that data.

This practice encourages a high number of functions, which, in turn, promotes better readability, if said functions are unambiguously named and kept short. If you're worried about the overhead of a function call, don't be -- short, small "leaf" functions are easily inlined by any modern compiler.
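
To make that concrete, here's a minimal sketch (function names made up): the scratch math that would otherwise sit deep in a nested block becomes a short leaf function, and the caller reads data first, operations second.

#include <limits.h>
#include <stdlib.h>

/* Short "leaf" function -- trivially inlined by any modern compiler. */
static int manhattan_distance(int ax, int ay, int bx, int by)
{
    return abs(ax - bx) + abs(ay - by);
}

/* Data first, then the operations on the data. */
static int nearest_enemy(const int *xs, const int *ys, int count,
                         int px, int py)
{
    int i;
    int d;
    int best = -1;
    int best_dist = INT_MAX;

    for (i = 0; i < count; i++) {
        d = manhattan_distance(px, py, xs[i], ys[i]);
        if (d < best_dist) {
            best_dist = d;
            best = i;
        }
    }
    return best;
}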

Of course, any of this could be done in C++, but C FORCES me to write in a cleaner style. C enforces the VERB(NOUN) command structure, instead of the OOP paradigm of NOUN->VERB(). A lot has been said about OOP being tacked onto "C with Classes" but I actually think Stroustrup did the best he could here, considering C wasn't really designed with OOP in mind. I just don't like OOP very much. The more I program, the more I find it inhibits readability.

I could go on. But to summarize, writing in C, for me, isn't just about performance. It's about improved readability coupled with excellent library support.

SiegeLord
Member #7,827
October 2006
avatar

I don't really agree with the declarations-up-front argument, but I do want to note that, even in this age of C and C++ replacements, recreational programming should be done in a language that's fun for you. There's definitely a degree of fun that can only be had in C, and realizing that, I continue to contribute to Allegro to enable people such as yourself, even though I use different languages myself.

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

Chris Katko
Member #1,881
January 2002
avatar

I'm moving toward D, myself. It's got way more constructs for the COMPILER to ENSURE that you're coding according to your own requirements.

For example, modern compilers CAN warn you if you don't check return values.

HOWEVER, with a very simple D template / mixin, I can write functions that throw an assert(0, "You messed up on line 124") if you forget to check the return value, AND have it compile away to zero cost with a single define in release mode. (Of course, this is 99% a compile-time-only check, so it shouldn't produce any slower code, but if you want other checks you can do the same thing.) The cool thing is, I can apply it selectively: either to certain return values, like an "error_return" type that casts down to an int, or to certain marked functions. The sky is the limit.
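
In plain C, the closest stock equivalent I know of is GCC/Clang's warn_unused_result attribute -- a rough sketch (function names made up), though unlike the D mixin you can't toggle it per call site or route it through a custom type:

/* GCC/Clang extension: warn (or fail under -Werror=unused-result)
   whenever a call site ignores this function's return value. */
__attribute__((warn_unused_result))
int load_level(const char *path);

void start_game(void)
{
    load_level("level1.dat");   /* compiler: warning, return value ignored */
}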

I'm a complete D newbie and I still managed, in a weekend, to make a template framework that allowed me to attach units of measurement to scalar values. So I could, at compile time, enforce that my physics equations all had the right units (a la m^2 / j * t). I never worked further on it, but the hardest part was ensuring the non-easy calculations worked out. On the other hand, someone already WROTE a full units framework in D, with probably better syntax, testing, and performance:

https://github.com/nordlow/units-d
(another one)

https://github.com/biozic/quantities

But the point is, I'm no genius and I managed to write 90% of "something really cool and useful" which can be used to increase my productivity by reducing common errors.

[edit] ALSO, D supports PURE FUNCTIONS, which even John Carmack tried to emulate in Doom 3 because they're so useful. They're functions that can only call other pure functions. Pure functions cannot modify data; they can only take input and produce output. There is no possibility of a side effect (like calling a global function that mutates global state). And no input data structure can change, so you have to create a new one if you want to modify the previous. That "sounds" slow, and in tight loops, sure. HOWEVER, it's also a GODSEND when you combine it with concurrency (you know, where ALL TECHNOLOGY is heading). You don't have to worry about something being mutated while you use it in another thread; it's ensured at the language level. So it forces you to think in a way that's more compatible with proper, concurrency-safe programming architectures and patterns. It also strongly reduces the possible areas for an error, especially in concurrent programs, where errors happen randomly when certain threads line up.
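
(For the C folks: GCC and Clang have a much weaker cousin of this in the pure and const function attributes. Unlike D's pure, they're promises to the optimizer rather than rules the compiler enforces, but they document the same intent. A rough sketch, with made-up names:)

/* "const" function: result depends only on the arguments -- no reads
   or writes of global state. Repeated calls can be folded away. */
__attribute__((const))
int clamp(int value, int lo, int hi)
{
    return value < lo ? lo : (value > hi ? hi : value);
}

/* "pure" function: may read globals, but must not modify anything. */
extern const int difficulty_table[4];

__attribute__((pure))
int scaled_damage(int base, int level)
{
    return base * difficulty_table[level & 3];
}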

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

princeofspace
Member #11,874
April 2010
avatar

Chris Katko said:

So it forces you to think in a way that's more compatible with proper, concurrency-safe programming architectures and patterns. It also strongly reduces the possible areas for an error, especially in concurrent programs, where errors happen randomly when certain threads line up.

This is an interesting argument, and one that's made me seriously examine languages like Rust. (Though for Rust, the toolchain isn't quite where I'd like it to be yet.)

One of the problems I've had in writing Allegro-based game engines that use concurrency is that I already have more performance than I can use out of plain old C. The major bottleneck is drawing, but drawing to memory bitmaps is slow compared to video bitmaps, and video bitmaps MUST be used from the display's thread or performance penalties result. Game updates, even with sophisticated collision checks, are at least 50,000 times faster than rendering one frame, according to my numbers. Not that I have any complaints about Allegro's draw speed -- I'm easily hitting 60 fps even on older hardware.
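
For anyone who hasn't run into that distinction, here's a minimal Allegro 5 sketch of what I mean (sizes and names arbitrary; assumes Allegro is initialized and a display exists):

#include <allegro5/allegro.h>

void create_working_bitmaps(void)
{
    /* Video bitmap: stored GPU-side, fast to draw, but tied to the
       display's context -- i.e., the thread that created the display. */
    al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);
    ALLEGRO_BITMAP *sprite = al_create_bitmap(64, 64);

    /* Memory bitmap: safe to build up in a worker thread, but slow
       to blit to the screen. */
    al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
    ALLEGRO_BITMAP *scratch = al_create_bitmap(256, 256);

    (void)sprite;
    (void)scratch;
}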

In an old language like C, concurrency introduces greater complexity, which I prefer to avoid in my personal projects. D might be an interesting alternative....

SiegeLord said:

There's definitely a degree of fun that can only be had in C, and realizing that, I continue to contribute to Allegro to enable people such as yourself even if I use different languages myself.

Thank you very much for all you do for the community!

Neil Roy
Member #2,229
April 2002
avatar

I only code in C. I have played with C++, but I find it convoluted bloatware that only makes your programs look like a mess and makes them more prone to errors.

No, Microsoft doesn't support C properly. My solution is simple: I don't use Microsoft software. Problem solved! MinGW does support the latest version (the 2011 C standard), so I use it (with "-std=gnu11", so you get the GNU extensions as well).

I just don't see a need for C++, for me anyhow. C is faster, and to me the code looks cleaner and more understandable. The latest C++ standards are even worse, and I hate even looking at all that template code, which is even more of a mess.

I use Code::Blocks + MinGW, which has the benefit of being more cross-platform friendly as well. I think it's a bad habit to rely on a Microsoft compiler made for Windows that isn't totally free (free to use, but if you make money with something you build with it, you have to pay them). No thanks. My Deluxe Pacman 1 & 2 are both created in pure C, compiled with GCC, not a C++ compiler.

I simply see no reason to use anything else. And learning a whole new language because someone else thinks it's better is not a good reason. C works, and it works well. It is very low level, the closest thing to Assembly you will get.

Many may not know this, but there has been a movement away from object-oriented programming. Newer C++ tutorials don't even use constructors and destructors, which I find hilarious.

There's a good talk by Jack Diederich called "Stop Writing Classes!" -- he doesn't speak out against classes themselves, but against how they are overused and misused. Personally, I am not a fan of them at all.

[video]

And this is also a good video called "OOP is the Root of All Evil" by Jeff Ward. This one is a REALLY good watch, as he goes over some of the problems with objects: memory problems like cache misses, big problems with virtual functions that make it impossible for the compiler to optimize, etc., etc.

[video]

---
“I love you too.” - last words of Wanda Roy

beoran
Member #12,636
March 2011

I personally like to use C for games because I am guaranteed good performance, good portability, and relative simplicity. Furthermore, I agree with Neil Roy that C++ is a horribly bloated language, so I thoroughly dislike it. I also investigated many other languages, from Rust to Go to OOC, etc., but I found none to be wholly to my liking. So I'm just sticking with good plain old C.

But I admit that I do use mruby as my scripting language, to ease the work somewhat in cases where C is a bit weak (string processing, easy scripting, ...).

So no, you're not weird. I think C is cool, and I am really grateful to SiegeLord, Trent & others who keep maintaining Allegro so we can keep using C for our games.

Specter Phoenix
Member #1,425
July 2001
avatar

The old debate that silences questions. Fond memories of seeking advice for C++/Allegro||SDL1.2 and being told to change the language. In recent years you get told to change libraries as well as languages.

princeofspace
Member #11,874
April 2010
avatar

Specter Phoenix said:

Fond memories of seeking advice for C++/Allegro||SDL1.2 and being told to change the language.

Even as a C die-hard, I don't think I'd tell someone they HAVE to write in C for a personal project. You can write games in nearly any language. I remember scratching out games in QBasic based on the text of a library book written for the Timex Sinclair.

They didn't work great...

Chris Katko
Member #1,881
January 2002
avatar

C really is great for forcing people to understand what's going on. I really can't think of a better "low level" high-level language. But, once you know what's going on, it's not some huge penalty to move to a higher-level language as long as the machine code equivalent is predictable.

I know what a hash function is. I know what a linked list is. I know how trees work. And I know how vtables work. I also know that syscalls are orders of magnitude slower than most code.

Back in college, I took two SEMESTERS' worth of finite element analysis. We spent the entire first semester (senior year) just on the math behind finite element analysis, calculating solutions by hand. It wasn't till grad school, in the second class, that we even got to USE the software. Why? Because the software lies. It was basically a year dedicated to how the software lies, in what ways, and why... down to the mathematical basis for the failure.

99% of people aren't going to be ripping out paper equations in their daily work. But the point was, you should be able to if you need to -- to verify why the computer model is wrong.

The same applies to programming. I WANT to care more about high-level issues--the work I'm trying to get done--than the implementation. BUT, I want to know--with confidence--what my high-level code is going to end up as under the hood.

Personally, I think languages haven't gotten high-level enough yet. I have no idea why, when it's so easy to write code, 99.9% of people and languages have no proper means of specifying static and dynamic RULES. Oh yeah, we can turn on a compiler option that warns you if you missed a return value. We can run a static or dynamic checker program that checks for "best practices". But why are we still, in 2017, unable to easily specify our own rules and practices, relying instead on "coding standards" and "review" to ensure those standards are followed? You know what computers are good at? CHECKING CODE VALIDITY. So why don't we have easily available, powerful toolchains for specifying custom, per-project standards at the AST level? And further, why isn't the entire industry already using them?

I should be able to specify a list of commands that need to be called in order, and flag at compile-time if anyone writes code that fails to follow that order. Created an object but forgot to call an initializer? Forgot to load the TTF addon? Compile error.
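
The best you can do in C today is a runtime approximation, which is exactly my complaint -- a sketch with made-up names, where the assert should really be a compile error:

#include <assert.h>
#include <stdbool.h>

/* Runtime approximation of an ordering rule: the TTF addon must be
   initialized before any font is loaded. */
static bool ttf_ready = false;

void init_ttf_addon(void)
{
    ttf_ready = true;
}

void load_font(const char *path)
{
    assert(ttf_ready && "init_ttf_addon() must run before load_font()");
    (void)path;   /* ... actual loading ... */
}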

(The closest I've ever found were GCC MELT (dear god, look at that webpage gore), and manually forking DScanner to add custom rules. I shouldn't have to learn the entire LISP language just to say, 'make sure X method gets called before Y for any object Z')

Everyone I talk to skirts around the real issue. They say, "You should design APIs so it's impossible to call them out of order," and "if coders don't follow guidelines you've got bigger problems." Bullcrap. Programmers are human and make mistakes in proportion to the codebase size. The fewer requirements they have to keep in their head at one time, the less likely they are to forget one of them. The more balls you juggle at once, the more likely one falls to the ground. And as for APIs being impossible to call incorrectly? It must be nice living in perfect-world, where people have unlimited time and resources and there are no requirements except "programmer can't call incorrectly". Oh, look out that window -- it's raining exceptions!

Now, with a good language like D (and to a much lesser extent C++), you can use templates and static ifs to enforce rules. But it's the wrong tool for the job. You shouldn't be abusing templates to enforce SEMANTIC rules from INSIDE the language itself. You wouldn't write a bash script that checks itself for correctness. You should be able to write RULES the same way language designers do. You should be able to write a new attribute called "enforce_return", stick it in your function signature, and have the compiler check the return values against your homemade criteria (or whatever your rules are).

(e.g. Don't allow goto EXCEPT in this circumstance? That should be an AUTOMATED rule.)

We're programmers. We're supposed to automate things. But somehow, everyone is so used to the status quo that they automate everything... their VIM/emacs macros... their makefiles... their code... their unit testing... we'll even embed vim tab and line-width settings in our files to ensure consistent indentation... but code standards? NO. THAT one thing would be silly to automate! We'd rather stare at other people's code for hours during code reviews, like an accountant without a calculator.

I mean, if you really sit back and let your mind wander at the possibilities, there are tons of useful things we could all do for any medium-sized project or larger. And even with small ones, it would force us to "write properly." Static and dynamic code analyzers already exist with "Best practices." Why not add our own practices to ensure we follow our own rules?

Any FOSS project of size has a coding standard that contributors have to follow to submit a patch. Allegro does. GCC does. The Linux kernel does. Automate it.

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

princeofspace
Member #11,874
April 2010
avatar

Chris Katko said:

I should be able to specify a list of commands that need to be called in order, and flag at compile-time if anyone writes code that fails to follow that order. Created an object but forgot to call an initializer? Forgot to load the TTF addon? Compile error.

I did a lot of thinking about this today at work and when I came home you had made this post! I completely agree.

Take global variables. The problem with global variables isn't global scope itself -- global scope can be enormously useful in certain contexts -- but that the access rules for globals are very poor, at least in any language I've encountered.

Why? I'm not sure. If we're completely honest, static classes (or singletons, depending on which form of OOP you subscribe to) are really just global-scope variables in disguise, so globals are certainly in production use. In games, a class typically called the Registry is used to solve very specific problems at global scope.

In C, I get partway there by fancying up my header guards a bit. If an interface will be used in one place, and ONLY in one place, I use something like this:

#ifndef INTERFACE_NAME_H
#define INTERFACE_NAME_H

[ some interface access functions go here ]

#else
#error "Interface subsystem should only be called once -- " __FILE__
#endif

Now, if I forget and include this header more than once (within the same translation unit, at least), it trips a compiler error. I'll put in comments describing where in the overall game engine these function calls belong.

But it's far from a perfect solution.

Neil Roy
Member #2,229
April 2002
avatar

princeofspace said:

Even as a C die-hard, I don't think I'd tell someone they HAVE to write in C for a personal project.

I'm the same way. I'll tell you why I love C and prefer it, but I generally tell people that the best language to program in is the one you enjoy using. Forget what others think, honestly. If C++ is what you enjoy, then use it. If D or whatever... go for it. Popular opinion on this matter honestly doesn't count. I can, and will, tell you why I like C, but that doesn't really matter; in the end, you are the one who has to use it.

If you EVER plan to work with a team, though, you may wish to familiarize yourself with whatever language, and possibly coding style, they want to use. I don't plan on it, so C it is for me.

I don't know, Chris; I like the freedom the language gives me to do things the way I want. It's what makes it so powerful. If you start bogging it down with rules, you stifle creativity, and that makes things more difficult. In the end, if you can set your own rules, then you can also make mistakes setting those rules and leave yourself open to even more problems, I would think. The simpler the better, I think.

---
“I love you too.” - last words of Wanda Roy

bamccaig
Member #7,536
July 2006
avatar

Object-Oriented Programming / Message Passing / Virtual Functions

There is a performance hit for this, but it's negligible and it allows very powerful things to occur. The same code can do different things depending on the type of data that it's operating on. That's huge. You can customize the way the code behaves by defining another type of data to operate on. And all other code that operates on a related class of data continues to work on that data, but might work differently depending on the type of data.

There's enormous value in OOP. The problem isn't OOP. The problem is that idiots with keyboards were set loose trying to figure out what it means to do OOP without really understanding the value in it. The Java world is an excellent example, where you need 20 classes to achieve 1+1.

Even popular languages like C++ and Java and C# don't fully grasp the value in OOP/message passing. When I first learned an OOP style in a Lisp my eyes were opened to what it could be. In any case, not everything needs to be an object certainly, but at the same time the cost of an object is negligible and there's more flexibility in an object than a static function or POD structure.

Depending on the language, sometimes OOP is the best you can do, but if the language supports functions as first-class citizens then it's overkill to make everything an object. A language like C lacks these features, and so it really limits what you can do without getting fancy with macros (yuck) or writing 100x more code (yuck).

For games programming it's probably not as useful, since you hack the shit out of a game for a few years and then it doesn't change much. Most software in the world doesn't work that way. Instead, you gradually build upon it for decades, and if you aren't doing something to reuse code you'll soon end up with an unmaintainable mess.

Global Scope

The problem with global scope is absolutely that it's global. The code can access it anywhere, by definition. This makes it incredibly difficult to understand parts of the code that interact with this global state. If that global state isn't in an expected state then everything can go sour. This complicates testing and parallelism. It also limits the flexibility of code, a la polymorphism: if you're accessing data and code in global space then you cannot easily change the way a module of code works by injecting a differently behaving object or function (without unreliable, ugly hacks that might not even work in parallel). It really is best that you swear off this practice until you find one of those 0.001% of cases where it's actually better.
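
To sketch what injecting a differently behaving object looks like in plain C (all names hypothetical): code that reaches for a global can't be redirected, while code that takes its collaborator as a parameter can be handed a stub.

#include <stdio.h>

/* Hypothetical audio interface, passed in rather than global. */
typedef struct {
    void (*play)(const char *name);
} Audio;

static void play_to_speakers(const char *name) { printf("playing %s\n", name); }
static void play_silently(const char *name)    { (void)name; /* test stub */ }

/* Because the module receives its Audio, a test can inject a stub
   without touching any global state. */
static void on_player_hit(const Audio *audio)
{
    audio->play("ouch.wav");
}

int main(void)
{
    Audio real = { play_to_speakers };
    Audio stub = { play_silently };

    on_player_hit(&real);   /* production wiring */
    on_player_hit(&stub);   /* test wiring -- safe to run in parallel */
    return 0;
}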

Kitty Cat
Member #2,815
October 2002
avatar

bamccaig said:

Object-Oriented Programming / Message Passing / Virtual Functions
There is a performance hit for this, but it's negligible and it allows very powerful things to occur.

That really depends on how you use it. If you're calling virtual functions hundreds or thousands of times per frame (e.g. a virtual update() method that all object types override, and you have thousands of active objects that have it called in a loop), it can be non-negligible. And the worst part is that no profiler will tell you it's a problem, because of what's actually wrong: you're throwing away a bunch of memory you've been given "for free" and causing excessive intermittent cache misses. There is no slow instruction or function for the profiler to point to; you're just causing spurious slowdowns in random places as the code keeps stalling caches.

In regards to object-oriented programming, you just need to be careful how you think about objects. The most important thing is to make sure you structure your objects efficiently (thinking in-game entity = in-code object, for instance, is a big folly). If you have float Health; and Vector3 Pos;, but you never access Health at the same time as Pos, they probably shouldn't be in the same structure, or otherwise near each other in memory. You should go through data sequentially (and remember: code is data too), so avoid having objects defined, allocated, or processed in a way that necessitates skipping around in memory.
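
A rough C sketch of that split (field names hypothetical): keep the per-frame "hot" data separate so the update loop walks memory sequentially.

/* Array-of-structs: updating positions drags the cold health field
   through the cache alongside every position. */
typedef struct { float health; float x, y, z; } EntityAoS;

/* Struct-of-arrays: the movement loop touches only positions, so each
   cache line carries nothing but the data actually being processed. */
typedef struct {
    float *health;      /* cold: touched only on damage events */
    float *x, *y, *z;   /* hot: touched every frame             */
    int    count;
} EntitiesSoA;

void move_all(EntitiesSoA *e, float dx, float dy, float dz)
{
    int i;
    for (i = 0; i < e->count; i++) {
        e->x[i] += dx;
        e->y[i] += dy;
        e->z[i] += dz;
    }
}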

On certain systems, inefficient memory use can be a real killer. Especially on modern systems where CPUs are orders of magnitude faster than memory. It's very easy to make the performance impact non-negligible because of these things and never know it.

--
"Do not meddle in the affairs of cats, for they are subtle and will pee on your computer." -- Bruce Graham

princeofspace
Member #11,874
April 2010
avatar

I'm gonna invoke Steve McConnell from Code Complete here:

Quote:

Used with discipline, global variables are useful in several situations

He goes on to describe how most programmers get around use of global scope: by making a giant class that holds all the data and is passed around EVERYWHERE. This is global scope in disguise; as McConnell says, it holds to the letter of the law, but misses the benefits of encapsulation.

Actually, if you really need global scope, you should use global scope. Being global isn't of itself evil, but it can easily be overused. Using global scope where necessary avoids the overhead of passing very large structs.

True, there are repercussions where concurrency is concerned, but the areas where concurrency is truly useful (i.e., Allegro's audio system) are unlikely to benefit from global scope anyway.

bamccaig
Member #7,536
July 2006
avatar

It matters more for software that is large and maintained for decades. A game that is written once and never touched again can have all of the worst practices, shortcuts, etc., that you want. It just needs to work well enough to play the game. The behind-the-scenes bugs that the player doesn't notice or can work around don't really matter much.

That said, I imagine that some of those AAA titles that are riddled with bugs are in that state because the code was written sloppily and quickly became unmanageable. I can't remember the title anymore, but I think it was a PS3 game; I probably downloaded it from the PlayStation Store. It was completely unplayable -- I'd constantly get stuck on nothing. Skyrim and other Bethesda games are also notorious for their countless bugs, but perhaps that's a necessary evil for such a large game to come to fruition and still be profitable.

A one-man or 5-man indie game probably won't suffer from that scale and can probably be hacked up with little regard for such practices. That doesn't make global state good, or OOP slow. It just means global state works as long as you wire it up without bugs, and OOP can be too slow under specific circumstances. Most programs in the world are nothing like games. Games have their own set of rules. There are times to use best practices and times to bend the rules to get the performance that you need from the machines (but the machines of tomorrow will probably be bigger and faster, so all of that hard work might be obsolete in a few years).

princeofspace said:

He goes on to describe how most programmers get around use of global scope: by making a giant class that holds all the data and is passed around EVERYWHERE. This is global scope in disguise; as McConnell says, it holds to the letter of the law, but misses the benefits of encapsulation.

Actually, if you really need global scope, you should use global scope. Being global isn't of itself evil, but it can easily be overused. Using global scope where necessary avoids the overhead of passing very large structs.

To avoid passing large structs you can pass a pointer (or a reference in C++) to one. If your program's state is in a large structure hierarchy on the heap with a root pointer in main then you can still pass pieces (e.g., state->resourceManager) of it into functions to manipulate that part of the program state without having access to other parts.
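
A small C sketch of that (names hypothetical, echoing the state->resourceManager example):

/* Hypothetical program-state hierarchy rooted in main(). */
typedef struct { int textures_loaded; } ResourceManager;
typedef struct { float master_volume; } AudioState;

typedef struct {
    ResourceManager resourceManager;
    AudioState      audio;
} GameState;

/* This function can only see the resource manager -- not the audio
   state or anything else hanging off GameState. */
void preload_textures(ResourceManager *rm)
{
    rm->textures_loaded = 1;
}

int main(void)
{
    GameState state = {0};
    preload_textures(&state.resourceManager);  /* pass a piece, not the world */
    return 0;
}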

If all of the program state is available in global space then it becomes easy for parts of the code that have no business seeing or touching that state to do so. Global state means you don't have to think anymore about how to organize the program, because you can always just touch whatever you want from anywhere. This can lead to sloppy designs, and force even worse practices down the road to avoid spending countless days refactoring the resulting spaghetti code.

Global state isn't just bad for concurrency. It's lazy program design, and that can lead to sloppy code, and other problems.

princeofspace
Member #11,874
April 2010
avatar

There's a key difference between giving all parts of the program absolute power over global state and creating a few bits of data in global scope. I'd argue that passing a reference to program state in each and every function is equally lazy, and secretly poor design.

In truth, most software has some form of global state. Allegro programs, for example, must call al_init at or near startup. You've just set the global state of a program.

Is this bad design? No. Good design manages the globally scoped parts of the code with the same care as any other data. Blaming the bad design of games on global scope only begets more bad design, because we never get to the root of the problem.

An example I might offer would be John Carmack's early designs shown in the open source releases of old id software games. Few would accuse those games of being poorly constructed, but they use limited global scope expertly to accomplish various goals.

bamccaig
Member #7,536
July 2006
avatar

Quote:

It is scary to look in some old code that hasn’t been touched in years, and see a completely non-thread-safe static global variable used

I don't think he's much of a proponent of abusing globals. I've said again and again there are some rare uses for them, but if you're arguing with me about that then you must be arguing that globals are usually a good idea.

Edgar Reynaldo
Major Reynaldo
May 2007
avatar

Why I don't use C if I can help it:

Manual polymorphism sucks.

It doesn't have operator and function overloading.

Templates are handy in a pinch.

Namespaces.

I've been programming in Python lately, and it's a refreshing change from all the tedious manual crap you have to do in C and C++.

princeofspace
Member #11,874
April 2010
avatar

bamccaig said:

It really is best that you swear off this practice until you find one of those 0.001% of cases where it's actually better.

I'm not sure if global scope is usually a good idea, but I'm convinced now it's useful in more than 0.001% of all cases. I use hidden globals in about half my projects, because I've done it other ways and it's the most elegant and intuitive tool for that particular job, in that code. It's not the least bit more difficult to debug, if you use a debugger with watchpoints.

The old id software games are an example, and I'd also offer up Plan 9 from Bell Labs. Excellent code, and brilliantly designed, by some of the best engineers who ever wrote software. I learned a lot about design reading the Plan 9 code.

Later on in that tweet series you quoted, Carmack (apparently) says:

Quote:

@Jonathan_Blow yes, I would like to be able to require a #pragma on a per file basis to allow any static variables.

I think this is a reasonable practice and would like to see it supported in modern C compilers. I would also like to see all global variables made static by default -- there is usually no reason for the compiler to export the symbol.
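
For anyone fuzzy on C linkage, a two-line sketch (names made up) of the difference:

/* state.c */
static int frame_count = 0;  /* internal linkage: only state.c can touch it */
int score = 0;               /* external linkage: any file that declares
                                "extern int score;" can read and write it */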

bamccaig
Member #7,536
July 2006
avatar

Even a static (i.e., internal) "global" is better than an external global. At least you know it can only be accessed directly by that code file. It's a bit like a class variable at that point. I wouldn't even really call that a global variable: it's not global, since it cannot be accessed by your entire program. It's a static variable, which is a distinct concept.

princeofspace
Member #11,874
April 2010
avatar

bamccaig said:

It's a bit like a class variable at that point. I wouldn't even really call that a global variable: it's not global, since it cannot be accessed by your entire program. It's a static variable, which is a distinct concept.

That being the case, let me say I completely agree with your reasoning about true "extern"-style globals -- genuine use cases would be quite rare.

In one recent case I had an extern global array that could be replaced with a switch-case statement. Use of the "default" keyword made my code safer, sort of like using try/except in Python.
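
Roughly like this sketch (names made up): the shared extern table becomes a switch, and default catches out-of-range input instead of silently indexing past the end of an array.

/* Before (hypothetical): extern const int tile_costs[TILE_MAX]; shared
   as a global, with nothing guarding a bad index. After: */
typedef enum { TILE_GRASS, TILE_SAND, TILE_WATER } TileKind;

int tile_cost(TileKind kind)
{
    switch (kind) {
    case TILE_GRASS: return 1;
    case TILE_SAND:  return 2;
    case TILE_WATER: return 5;
    default:         return -1;   /* unknown tile: fail safely */
    }
}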

Neil Roy
Member #2,229
April 2002
avatar

bamccaig said:

This makes it incredibly difficult to understand parts of the code that interact with this global state. If that global state isn't in an expected state then everything can go sour.

Never had a problem with using globals. I won't use them if I don't have to, but I won't jump through hoops to avoid them either. I find I have more problems when I try to avoid them than when I just use them as intended -- mainly for variables that exist for the life of the program, whose memory I want freed only when the program exits.

I always hear about the global boogeyman and many other issues that never come up. Pointer horror stories, which also have never come up for me. I have more problems with a misplaced or missing semicolon than I do with anything else.

---
“I love you too.” - last words of Wanda Roy

Peter Hull
Member #1,136
March 2001

int* dest = malloc(item_count * sizeof(int));
int* src = prepare_source();
if (dest && src)   /* malloc (and the source) can fail; check before copying */
    memcpy(dest, src, item_count * sizeof(int));

princeofspace
Member #11,874
April 2010
avatar

Neil Roy said:

Never had a problem with using globals. I won't use them if I don't have to, but I won't jump through hoops to avoid them either. I find I have more problems when I try to avoid them than when I just use them as intended -- mainly for variables that exist for the life of the program, whose memory I want freed only when the program exits.

I always hear about the global boogeyman and many other issues that never come up. Pointer horror stories, which also have never come up for me. I have more problems with a misplaced or missing semicolon than I do with anything else.

Using globals, avoiding OOP, sticking strictly to C -- I love anti-establishment coding.

Specter Phoenix
Member #1,425
July 2001
avatar

I remember when I used to have the global variable const bool ANNOY_BAMS = true; just as a personal joke, due to his stern view on global variables. (He is fine with constant globals, though, since they can't be changed after initialization.) Just having it made me smile when I saw it -- well, back when I bothered coding.
