
virtual insanity

I’m trying to remain enthusiastic about C++. I’ve been pushing through the books, trying to gain enough fluency to start pushing forward on a real project. Every once in a while I’d run across something weird, like the necessity of explicitly writing a copy constructor and overloading the assignment operator for any non-trivial class, but those things seemed like quirks that would be easy enough to live with.

Then I got to the part about the virtual keyword.

In every other object-oriented language I’ve ever learned, complete support for polymorphism was just assumed. If you subclass some class and override some of its methods, then naturally whenever you call those methods on an instance of your subclass, you get your overrides. I can’t even describe this without sounding tautological, because all the terminology in OOP just assumes that things work this way. Polymorphism is such a fundamental concept that I never really considered that a language would call itself Object Oriented without supporting it inherently.

Then I got to C++. Consider this code:

#include <iostream>

class A {
public:
     void method();
};

class B : public A {
public:
     void method();
};

void demo_fail();

void A::method() {
     std::cout << "    A::method() called\n";
}

void B::method() {
     std::cout << "    B::method() called\n";
}

void demo_fail() {
     A *a = new A();
     A *b = new B();

     std::cout << "Calling a->method()...\n";
     a->method();
     std::cout << "Calling b->method()...\n";
     b->method();
     std::cout << "\n...What?\n";
}

You’d expect the above code to happily call A’s method, then B’s. Instead, the output looks like this:

Calling a->method()...
    A::method() called
Calling b->method()...
    A::method() called

...What?

It turns out that C++ does support polymorphism; it’s just that someone decided not to enable it by default. The virtual keyword is the key here; by adding it to the prototype of the method of the superclass, things start working again:

#include <iostream>

class Alpha {
public:
     virtual void method();
};

class Bravo : public Alpha {
public:
     void method();
};

void demo_succeed();

void Alpha::method() {
     std::cout << "    Alpha::method() called\n";
}

void Bravo::method() {
     std::cout << "    Bravo::method() called\n";
}

void demo_succeed() {
     Alpha *alpha = new Alpha();
     Alpha *bravo = new Bravo();

     std::cout << "Calling alpha->method()...\n";
     alpha->method();
     std::cout << "Calling bravo->method()...\n";
     bravo->method();
     std::cout << "\nThe |virtual| keyword is essential.\n";
}

Calling alpha->method()...
    Alpha::method() called
Calling bravo->method()...
    Bravo::method() called

The |virtual| keyword is essential.

The problem here isn’t so much that you need an arbitrary keyword to enable one of the fundamentals of OOP; it’s where that keyword has to go. Useful programming in the modern era depends on libraries of code that the individual coder doesn’t need, or want, to maintain. One of the nice things about OOP is that you can use a library as long as it’s useful, and even a bit longer: just subclass and extend until you have all the functionality you need. In C++, that only works if the library author has correctly anticipated exactly which functions you will need to override. In practice, this means that the author probably just stuck the virtual keyword onto every public and protected function, neatly negating the tiny compiler optimization that was the whole point of not making the ‘virtual’ behavior the default.

I have to admit that I am starting to dread getting to the section on exceptions. According to a friend of mine, C++ exceptions are hacktastic.



Comment by Yev
2008-06-28 23:59:15

You’ll find the same insanity in C#, which was created after Java, so I’m not sure if it makes any language less modern. There is actually a very good reason for this behavior, because otherwise, one can do ungodly things like this:

public class MyInnocentClass {
    public void doSomethingInnocent() {
        System.out.println("All is well!");
    }
}

If I’m a user of the above class, I can be pretty much assured that the class will, well, do innocent stuff. It presents a contract of sorts, an expectation of how it will behave. So if I then have a method

public void doStuff(MyInnocentClass c) {
    // ...
}

I should be pretty confident in what that method will do. But of course, someone can always extend MyInnocentClass and change doSomethingInnocent to blow up my computer instead. Now my code will have to deal with behavior that is entirely unexpected, and as a result, the behavior of my own code will be non-deterministic. So if every method of every class is virtual, you can never guarantee predictable behavior, because somewhere in your call stack a method might do what you don’t expect, because it’s been overridden in a subclass of a parameter. So I’m perfectly happy to define some methods virtual explicitly – because it means that I place no expectations on what exactly those methods will do, and will not rely on them producing any one kind of behavior.

Comment by coriolinus
2008-06-29 03:01:17

In Java, there’s the final keyword. The equivalent in C# is sealed. If you want to ensure that no subclass redefines your method’s behavior, you have that option.

The problem with making non-virtual functions the default is simply that the default should be reasonable. I had to read through 600 pages of an introductory C++ reference before I got to a part that even mentioned the virtual keyword; if I had tried to just dive in and start coding, I probably would have had more than a few hours of weird, inexplicable, head-banging bugs the first time I tried writing something significant.

Also, the only case in which your code needs to deal with objects that you can’t review the source of is when you’re calling library code. Presumably, you trust the authors of the libraries that you use. If someone else uses your code as a library, and they override doSomethingInnocent to blow up your computer, that’s a bug in their code, not yours. Anyone could write a subclass of list whose size() method works, and also reformats your hard drive; it’s just that nobody would use it.

Comment by Yev
2008-06-29 10:30:09

“sealed” prevents the inheritance of the whole class. That prevents you from doing things like the template method pattern with default behavior, in which you want to firmly define the gist of class behavior, but leave the rest to subclasses.

The fact that you had to read so much of a reference before getting to the “virtual” keyword is probably more a fault of the reference. It’s really a fairly basic thing, and usually dealt with early on.

I actually didn’t have library code in mind. If you’re dealing with black box code, static inheritance keywords won’t provide much security. I was thinking more of some imbecile coworker. For instance, let’s say I write a structure with an Add method that adds things to the beginning of a sequence. I accept one as an argument to my method, and work based on that assumption. An imbecile coworker subclasses my code, but his Add method adds to the end. The problem is that when implementing my method, I don’t know WHOSE code I’ll be using. Now, if I wanted to make my code generic, my parameter should have been an interface or an abstract class – then I would know that the only expectation my code has is what is documented in the interface. But when calling methods on an implementation, there’s still that implicit assumption that you know which code you’re calling. That’s why you still can create subclasses that override non-virtual methods. You just can’t call those methods when you’re treating your subclass as its base class.

I switched from Java to C# a little under a year ago – and really, after the first week or two, it hasn’t been a problem. If anything, it’s forced me to think better about my design and architecture – where I expect customized behavior, and where I don’t.

