The only place where it's "slower" is doing lots of per-pixel operations. But if you do them in a memory bitmap, like before, they're just as fast. It's just that per-pixel modifications to video-card RAM completely swamp the bus on a modern computer.
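To make that concrete, here's a minimal Allegro 5 sketch of the memory-bitmap approach. It assumes al_init() was already called; the "plasma" math and the function name are made-up stand-ins:

```c
/* Minimal sketch of CPU-side per-pixel work in Allegro 5. Assumes
 * al_init() has already been called; the "plasma" math and function
 * name are made up for illustration. */
#include <allegro5/allegro.h>
#include <math.h>

ALLEGRO_BITMAP *render_plasma(int w, int h, float t)
{
    /* Force a memory bitmap so pixel writes go to system RAM instead
     * of crossing the bus once per pixel. */
    al_set_new_bitmap_flags(ALLEGRO_MEMORY_BITMAP);
    ALLEGRO_BITMAP *bmp = al_create_bitmap(w, h);

    al_set_target_bitmap(bmp);
    /* Locking lets al_put_pixel write straight into the pixel data. */
    al_lock_bitmap(bmp, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_WRITEONLY);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float v = sinf(x * 0.1f + t) + cosf(y * 0.1f + t);
            al_put_pixel(x, y, al_map_rgb_f(0.5f + 0.25f * v, 0.2f, 0.4f));
        }
    }
    al_unlock_bitmap(bmp);

    al_set_new_bitmap_flags(ALLEGRO_VIDEO_BITMAP);  /* restore the default */
    return bmp;  /* drawing this to the display is one upload per frame,
                    far cheaper than per-pixel bus traffic */
}
```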
But if you have a huge existing codebase heavily coupled to A4, A4 might be the best bet, just because changing any API takes time and introduces bugs. For everything else, there's really no reason to keep A4 over A5 except stubbornness or laziness.
If you were to use any modern library (SDL, SFML, anything), they're all designed like A5. A4 originated in the Amiga and DOS days and the API shows it. You don't need assembler and compiled sprites to draw bitmaps fast anymore. You simply tell the GPU "draw this bitmap at x,y".
It's really fast for a GPU to "do X", but telling the GPU "do X" becomes a problem when "X" is really small and really numerous. It takes Y amount of time to send a command over the bus to the GPU, and that overhead multiplied across a tiny operation like "change a single pixel" completely swamps the bus. While not exactly true, to illustrate the point: it takes just as long for a modern GPU to draw an entire sprite with rotation, color, and lighting as it does to draw a single pixel. So upload your sprites ONCE, every frame of every animation (using those free gigs of VRAM), and then just draw the sprites.
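In A5 terms the pattern looks something like this; a minimal sketch, assuming the image addon and a hypothetical "sprite.png", with error handling and timing stripped out:

```c
/* Minimal sketch of the "upload once, draw many" pattern in Allegro 5.
 * Assumes the image addon; "sprite.png" is a hypothetical asset and
 * error handling is stripped. */
#include <allegro5/allegro.h>
#include <allegro5/allegro_image.h>

int main(void)
{
    al_init();
    al_init_image_addon();
    ALLEGRO_DISPLAY *display = al_create_display(640, 480);

    /* Loaded ONCE into a video bitmap, i.e. a texture in VRAM. */
    ALLEGRO_BITMAP *sprite = al_load_bitmap("sprite.png");

    for (int frame = 0; frame < 600; frame++) {
        al_clear_to_color(al_map_rgb(0, 0, 0));
        /* Each call is one cheap command to the GPU; the rotation and
         * tint cost essentially nothing extra. */
        al_draw_tinted_rotated_bitmap(sprite,
            al_map_rgb(255, 200, 200),  /* tint */
            16, 16,                     /* center of rotation */
            320, 240,                   /* destination on screen */
            frame * 0.01f,              /* angle in radians */
            0);
        al_flip_display();
    }

    al_destroy_bitmap(sprite);
    al_destroy_display(display);
    return 0;
}
```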
The modern way of programming graphics (and every video game by a AAA company has worked this way since like 1998) is that you exploit the fact that a GPU can do macro operations, and if you need very specific custom per-pixel operations (other than lighting and coloring, which are basically FREE on GPUs), you build a shader for it. A shader is just a little program that takes a few variables, changes them, and outputs a result. So instead of manually making per-pixel procedurally generated fire on the CPU, you put that code directly onto the video card/GPU and boom, it's literally 1000x, 10000x, or more faster than a CPU ever was, and it scales with GPU upgrades with no code changes. (More shader runs simply get divided up across more shader units, and likewise with smaller GPUs.)
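Here's roughly what that looks like with A5's shader API (5.1+). A sketch only: the GLSL names al_tex, varying_texcoord, and varying_color are Allegro's standard ones, but the "time" uniform and the flicker math are invented for this example:

```c
/* Sketch of moving a per-pixel effect onto the GPU with Allegro 5.1+
 * shaders. The GLSL names al_tex, varying_texcoord and varying_color
 * are Allegro's standard ones; the "time" uniform and the flicker math
 * are invented for this example. The display must be created with the
 * ALLEGRO_PROGRAMMABLE_PIPELINE flag. */
#include <allegro5/allegro.h>

static const char *pixel_src =
    "uniform sampler2D al_tex;\n"
    "uniform float time;\n"
    "varying vec2 varying_texcoord;\n"
    "varying vec4 varying_color;\n"
    "void main() {\n"
    "    vec4 c = texture2D(al_tex, varying_texcoord);\n"
    "    /* Runs once per pixel, in parallel across all shader units. */\n"
    "    float flicker = 0.5 + 0.5 * sin(time + varying_texcoord.y * 40.0);\n"
    "    gl_FragColor = varying_color * vec4(c.r, c.g * flicker, c.b * flicker, c.a);\n"
    "}\n";

ALLEGRO_SHADER *make_flicker_shader(void)
{
    ALLEGRO_SHADER *shader = al_create_shader(ALLEGRO_SHADER_GLSL);
    al_attach_shader_source(shader, ALLEGRO_VERTEX_SHADER,
        al_get_default_shader_source(ALLEGRO_SHADER_GLSL, ALLEGRO_VERTEX_SHADER));
    al_attach_shader_source(shader, ALLEGRO_PIXEL_SHADER, pixel_src);
    al_build_shader(shader);
    return shader;
}

/* In the draw loop:
 *   al_use_shader(shader);
 *   al_set_shader_float("time", t);
 *   al_draw_bitmap(sprite, x, y, 0);  // effect applied per pixel, on the GPU
 *   al_use_shader(NULL);
 */
```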
So basically, unless you're doing something really wrong by completely ignoring how the hardware is designed, A5 (and any OpenGL-powered program) is gonna be balls-out fast for a 2-D game. Like, literally unimaginably faster than any DOS-era, CPU-powered blitting program that we grew up with. If the new way is slow, you're either making a mistake, or you're forgetting that you're using way higher resolutions, color depths, numbers of sprites, etc. than you ever did before. Even 3-D games push unimaginable amounts of polygons. Even my years-old netbook with integrated Intel graphics, which runs on less than 10 watts of power, can push enough polygons to hit at least 2002-era AAA-game graphics, maybe higher.
Here's Battlefield 3 running on the same freakin' CPU as my chromebook.
My exact laptop (I think) running at 800x600 ... GTA V:
Now, there is one more huge trick. They're running custom engines that all use deferred rendering: instead of lighting every object as it's drawn, you first render the scene's geometry attributes into buffers, then compute lighting once per screen pixel. It's what all the modern AAA games that can afford a dedicated graphics architect use. Any game that doesn't (99% of indie games, even 3-D ones) will run much slower, because they can't afford to tweak the graphics pipeline to get 100% pure exploitation of all GPU resources.
Want to know how modern graphics engines work with deferred rendering? Check out these amazing analysis articles:
From the GTA V study: "Considering the heavy streaming of assets going on and the specs of the PS3 (256MB RAM and 256MB of video memory) I'm amazed the game doesn't crash after 20 minutes, it's a real technical prowess."
Deus Ex HR: http://www.adriancourreges.com/blog/2015/03/10/deus-ex-human-revolution-graphics-study/