
Vtables…a quick look

I’m sure it’s a term we’ve all heard in our C++ lives, right? But what are vtables? What do they give us in C++?

Well, they’re kind of a big deal actually.

First of all, as you’ve probably guessed, it stands for Virtual Table.  But secondly, and perhaps more importantly, without vtables we wouldn’t have runtime polymorphism available to us in C++, as all the references to functions would be bound at compile time.

So what is a Vtable?

Well, it’s a table of function pointers.  Each object of a class with virtual functions carries a vtable pointer (often called the vptr), which points to the virtual table for objects of its type.

Let’s have a look at some code.


struct baseStruct {
    virtual void createWidgets() {}
};

struct derivedStruct : public baseStruct {
    void createWidgets() override {}
};

void makeWidgets(baseStruct* widget) {
    widget->createWidgets();  // which createWidgets? That's decided at run time.
}

int main() {
    derivedStruct d;
    makeWidgets(&d);
}

The widget pointer in makeWidgets points to a baseStruct object, but at run time the call resolves to derivedStruct::createWidgets(), since createWidgets() is virtual.

The call only needs to look up the createWidgets() entry in the vtable that the object’s vptr points to, and then call the function that entry points to.

At compile time, the compiler can’t know which code the widget->createWidgets() call will execute, since it’s not known until run time what widget points to.

I’m still learning myself, but this is very much a quick overview of what a vtable is.

Happy coding


Looking under the hood….

Over the last year, I’ve been attending the UK BSI C++ panel meetings in London, and it’s been an eye-opening experience. I must confess that often I sit there listening and taking lots of notes on all the stuff I want to look up when I get home (it’s a long list…). But I was having a chat with a friend of mine, to whom I mentioned, “I want to get to look under the hood of the language, so I can follow these discussions…”

The glint in his eye should have been my cue to run away, but I didn’t see it, so eager was I to learn more. He gave me some helpful hints as to where to start, and that’s how this blog post came about.  I’m going to be writing about reference/value semantics.

Let me say right now, I’m not a deep expert, this article is the result of a lot of reading, and trying to get my own head around it all.

C++ has value semantics by default, whereas other languages such as Java, Python etc have reference semantics by default. This clearly marks C++ as a different breed of programming language, which also raises some interesting questions (which my new mentor challenged me to think about…):

  • What does it mean to have value semantics by default?
  • How does this make a difference in C++?
  • How does this make C++ different to other languages?
  • What are the implications of these differences in regards to:
    • Performance?
    • Memory usage and allocation?
    • Resource management?

What does it mean to have value semantics by default?

Maybe a good starting point would be “What does value semantics mean?”  In its simplest terms, value semantics describes a programming language that is primarily concerned with the value of an object, rather than the object itself.  Objects are used to denote values; we don’t really care about the identity of the object in such a programming language.

Now, it’s important to note that when I speak of objects in C++, I don’t mean the Java or Python definition of an object, which is an instance of a class with methods and such. In C++, an object is a piece of memory that has:

  • an address (0x0FC349 for example)
  • a type (int for example)
  • the capability of storing a value (42)

This leads to another important point: the sequence of bits stored in memory can be interpreted differently depending on its type. For example, the bit pattern 01000001 can be read as 65 if the type is a short int, yet it can also be interpreted as ‘A’ if the type is a char.

Now it’s important to note that C++ has value semantics by default. That is to say, there are no keywords or special symbols you need to use, to tell the language that you’re using value semantics.

Consider the following code snippet:

x = y;

What’s going on here? Well the = isn’t an equality operator in C++, it’s an assignment operator. And in this context, the value of y is being copied to x.

But x isn’t a value, it’s an object. So why isn’t x a value? Well, it’s because it can hold 1 at one moment and 45 at another. So if we want to know the value of x, we need to read the memory at the address where x is held.

So why did C++ go down the route of having value semantics over reference semantics?

  • Allocating on the stack is faster than allocating on the heap.
  • Local values are good for cache locality. If C++ had no value semantics, it wouldn’t be possible to have a contiguous std::vector, you’d simply have an array of pointers, which would lead to memory fragmentation.

Why use value semantics then?

You get value semantics by default in C++, but you need to make a specific effort to use reference semantics, by adding a reference or pointer symbol (&, *).

Using value semantics, we avoid memory management issues such as:

  • dangling references to a non-existent object
  • expensive and unnecessary free store allocations
  • memory leaks
  • smart/dumb pointers

It also helps to avoid reference aliasing issues in multi-threaded environments.  Passing by value and ensuring each thread has its own copy of the value helps to prevent data races.

You also don’t need to synchronise on such values, and the programs run faster, and safer as you avoid deadlocks.

It’s also beneficial for referential transparency. This means that we don’t get shocks or surprises when a value is changed behind the scenes.

And using pass by value is often safer than pass by reference, because you cannot accidentally modify the parameters to your method/function. This makes the language simpler to use: you don’t have to worry about the variables you pass to a function, as you know they won’t be changed, and this is often what’s expected.

Then when do we use Reference Semantics?

We use reference semantics when something has to be in the same location in memory each time.  A good example of this would be something like std::cout or any such global.

You also use reference semantics when you want to modify the value you’re passing to your function, and this is made explicit in C++ by passing a reference or a pointer to your function.


void foo::do_something(int & some_value) {
    some_value = 42;  // the caller's variable really is changed
}

This is just a starter for 10 type article, I will go deeper in to this as time goes on and I learn more 🙂

In the meantime, happy coding.


In The Beginning Was The Command Line…

More years ago than I care to admit, a friend of mine loaned me a copy of Neal Stephenson’s In The Beginning Was The Command Line, and miracle of miracles, I’ve finally managed to finish reading it.  (Sorry Chris!)   It really got me thinking about how I use a computer.

It’s a really thought-provoking book.  Given it was written in 1999, it was surprising to me to see how little has changed, in one sense, in the way that we use our computers. When Stephenson was writing the book, Mac OS X was on the design board, Apple having bought NeXT back then.  Microsoft was still pushing Windows 95, 98 and Windows NT, which, as we’re all aware, are UI-based operating systems.

And he makes the point that the OS software is no longer letting the user interact directly with the computer, but rather it decodes what the user wants to do, and issues the right calls, to make sure that it happens.

I for one am not a fan of that; as a programmer, I want to know EXACTLY what my computer is doing.  (The irony is not lost on me as I type this blog post up in the WordPress desktop application.)  And I’m sure I’m not alone in that my first experience of a computer didn’t involve a window, but rather a flashing prompt waiting for a command to be entered, or in the case of my ZX Spectrum, a BASIC keyword to be inserted.

And I for one really miss that.  Don’t get me wrong, GUIs have their purposes, but as Stephenson says in his essay, they put layers of abstraction between the user and the hardware of the machine. And that can be both a blessing and a curse.

I code mainly in Linux, and therefore I have a UI editor, which at the moment is Visual Studio Code for my C++/Node.JS stuff, or IntelliJ if I’m doing stuff with Java, but I’ll always, and I do mean always, have a terminal window open.  Because frankly it’s a lot faster for me than trying to find the right option in the UI.  The only reason I use an IDE these days is that it’s faster to navigate across the project structure than remembering which directory a certain file lives in. So both can live in perfect harmony.

But the command line can hold danger for the uninitiated as well, as I found out to my cost in the early days of using my first PC.  I was in the process of formatting some floppy disks, or at least I was trying to, when I noticed that the drive wasn’t making any noise, but the hard disk light was flashing quite a bit.  I hit Control-C, and found that I’d blatted most of my hard disk! (A mighty 40MB back then…)  I thought I’d entered the correct command, but clearly I hadn’t, and so lost about a year’s worth of… well… accumulated shareware games (not a great loss), but I’d also wiped out half of my operating system, which was MS-DOS 4.2.

While there’s dangers in misusing the command line, I also agree with Stephenson that OS companies are doing their users a disservice by hiding commonly used tools in some sub-menu of an application.

He cites the UNIX command wc as an example.  In UNIX, wc gives you the number of words/characters in a file (depending on the arguments you pass it.)  So to get the word count of a file you’d do:

wc -w <filename>

So if I had a file called greeting.txt with the text Hello World in it, the command would return:

emyrw@lothal:~/temp$ wc -w greeting.txt 
2 greeting.txt

Now, if I tried to find that in Word or something like that (although I know there’s a word count on the bottom bar of the application), I’d need to a) know where to look, or b) know which sub-menu the word count lives in.

When I was at university, I wrote my dissertation on an installation of Linux (I think it was Gentoo) and I had a terminal window open, and every time I wanted a word count, I ran the wc command.  For me it was much faster than trying to remember a menu somewhere.

Now I totally get it, not everyone wants to interact with a computer using a command line interface, but I hope that it’s not consigned to history as something old-hat.  (I’ve had someone say that to me once…)  The command line is one of the most useful things to learn to use. If your OS goes bang, then if you know how to navigate around using the command line, and which files to edit, then you have a chance of recovering your system.

If you’ve not read Neal Stephenson’s book, I can’t recommend it enough.

Let it go….let it go…

I’ve not become a fan of Frozen, I promise.

During the last few weeks, I’ve been reading a LOT of code. Mainly other people’s code, and something struck me.  There was a LOT of commented out code. And this code was checked in to version control too.

I may be approaching this from a simplistic point of view here, but I can’t fathom why people comment out code, and then check that code in.  I’ve heard a variety of reasons, some of them good, some of them, not so good.

But I think I’ve nailed it down to this. Fear!

There’s a fear that we may need the code again at some point in the future.

However I would counter that with the following. If it’s not needed right now, then why is it in the code base? And more importantly, why is it checked in?

Commented out code CAN be useful such as providing an example of using a complex API or such. However in this instance it was code that wasn’t going to be executed as the developer had found a better way of doing it.

So it begs the question, why is the code still there?

I can understand if we weren’t using version control, or if the server was flaky etc, but the servers are fairly robust these days, and besides there’s redundancies as well.  So again, why is it there?

I would say that we should be merciless with commented out code; I’ll confess that I am in code reviews. It not only distracts the developer maintaining the code, but also disrupts the flow of the code when you’re reading it.

We shouldn’t be afraid of removing commented out code from projects.  That’s what version control is for.  If we need that code again, we can easily get to where the code existed before being removed.
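To show how easy that recovery is, here’s a sketch using git (the file name old_parser.cpp is hypothetical, and I’m building a throwaway repo in a temp directory purely for demonstration):

```shell
# Demo in a scratch repo: create a file, commit it, then delete it...
cd "$(mktemp -d)" && git init -q
git config user.email demo@example.com && git config user.name demo
echo "int parse();" > old_parser.cpp
git add old_parser.cpp && git commit -qm "add old parser"
git rm -q old_parser.cpp && git commit -qm "remove old parser"

# ...and now resurrect it from just before the deleting commit:
git checkout "$(git rev-list -n 1 HEAD -- old_parser.cpp)^" -- old_parser.cpp
cat old_parser.cpp   # the "deleted" code is back
```

The `git rev-list -n 1 HEAD -- <file>` trick finds the last commit that touched the file (i.e. the deletion), and the `^` steps back to its parent, where the file still existed. So the history really is a safety net, and the commented-out copy in the source is just clutter.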

Also, we shouldn’t be afraid of removing code that’s no longer in use!  I did exactly that a few weeks ago.  I was working on a legacy product, and there was a section in the project that I didn’t think was being executed.

I grepped for the class names, and found they weren’t being used anywhere else. Once I’d done that, I removed their entries in the Maven pom files and tried to compile.  It compiled fine without issue, and I tried to run the product, and that ran fine too.  So after that I removed the directories and their contents.  Repeated the build and execute steps and functionality wasn’t compromised.

Sometimes the best things we can do for our code, is to actually remove code. Whether it be commented out code, or code that’s never actually executed.


ACCU Tutorial Day 1 Review

So it’s April, which means the ACCU Annual Conference is here, and it was off to Bristol with me to attend the pre-conference tutorial. The original tutorial I’d registered for had been pulled, as the speaker had been taken seriously ill, so I wish him a speedy recovery and hope to hear his tutorial next year if he gives it. That meant choosing another tutorial to attend, and I chose Kevlin Henney’s Raw TDD.

It was an excellent talk; in the first portion Kevlin presented the various facets that make TDD what it is, as well as defining what TDD is and what it isn’t.

After lunch, using Jon Jagger’s excellent cyber-dojo, we practised developing using a pure test-driven approach. Kevlin was very strict on this as well, and it was quite a challenge. I must confess I thought what I did at work day to day was TDD, but it turns out I have GUTs (Good Unit Tests) rather than using a pure TDD approach.

After we’d switched coding partners a couple of times, we started to write our own testing framework. Now there are many excellent frameworks out there, so Google Test has nothing to worry about at the moment, but it was awesome to see how relatively easy (albeit with a fair amount of knowledge) it is to write your own unit testing framework.  Ours was based on assert, and we built upon it.

All in all, it was a very enjoyable day of learning, and as ever Kevlin’s style of delivery was its usual energetic, engaging and enthusiastic self.

As a side note…

If you’ve never been to the ACCU Conference before, I’d strongly recommend going. It’s a world-wide gathering of C++ programmers, and in the past we’ve had talks given by Bjarne Stroustrup, Scott Meyers, Uncle Bob Martin, Michael Feathers and many others besides.

Also consider joining the ACCU.  It’s a great organisation, and it doesn’t cost all that much to join. And it has an awesome community of people. And if you’re a member you get a sizable discount on the conference, so double bonus (:

Out with the new!

Before I became a C++ developer, I wrote a lot of Java, and I mean a LOT. When I was unemployed I wrote a point of sale system, a stock management system, and a web servlet app. I wrote pretty much everything in Java back then.

Then I learned C++, and I didn’t realise that I was writing my C++ in a Java fashion.  Indeed, when my mentor at the time saw some code I’d written, just like a Java developer I’d put everything in one file within a class.

And wouldn’t you know it, now I’m the mentor and I’m finding my mentee is doing the exact same thing. And some may say why is that bad? It all has to do with a simple word. And that word is, new.

In Java if you wanted to create an object, you’d do something like this:

public static void main(String[] args) {
    Person fred = new Person();
}

This looks all nice and dandy, doesn’t it?  You simply create a new Person object, and then forget about it. Now I want to point out that in Java, this IS NOT a bad thing. Java has a special mechanism to deal with it called Garbage Collection: something that runs periodically, checks for objects that are no longer reachable or have gone out of scope, and deletes them.

However, C++ doesn’t have this feature. (Well not as such, but it’s beyond the scope of this blog post). C++ is a very powerful language, and it trusts that you know what you’re doing and this is true of object management.

C++ doesn’t clear down objects for you, you’ve got to do that yourself. In C++ for every, yes EVERY object that’s created with the keyword new, there MUST be a corresponding delete for it.

So consider the following code:

int main() {
    Person* p1 = new Person();
    // now we're done with the person object.
    delete p1;
}

The above code not only creates a person object, but also deletes it.  An important thing to learn also is to delete things in the reverse order you create them. It’s considered good practice, and can prevent possible undefined behaviour.

The other thing new does, is that it creates your object on the heap. “So what?” I hear you ask. Well, let me explain a little about why this isn’t ALWAYS what you want…

A very quick note on stacks and heaps…

When your program runs, its objects can be stored in one of two places in memory: the stack and the heap.

The stack is memory that’s allocated for a thread of execution. So for example, when your function is called, a block is reserved on top of the stack for local variables and some data pertaining to your function. When the function returns or hits its last brace, the block is freed automatically and can be used for another function. Imagine, if you will, a stack of plates in a cafeteria; a memory stack works the same way: Last In, First Out, otherwise known as LIFO. So the most recently used block is always going to be the first one freed.

Stacks don’t tend to be that big, so you wouldn’t want to put a 512MB vector on there for example. That’s what the heap is for.

The heap is a chunk of memory that’s set aside for dynamic allocation. There’s no enforced pattern like there is on the stack; on the heap you allocate your data pretty much where you want, as long as there’s a contiguous space big enough to hold it.

This is where your data is put when you use the new keyword in your code, and if you don’t call delete on your new’d object, then it will stay in memory thus swallowing up some resource.

So what are the options?

Now that we know where new’d objects go, what are the alternatives?  Well, you can create things on the stack; however, as previously mentioned, the stack isn’t as big as the heap, so we must be judicious about where in memory we place our objects. For example, if you have a class object that you know is going to hold a massive vector of objects, place it on the heap using the new keyword.  Otherwise, put it on the stack.  One of the main benefits of this approach is that you don’t have to worry about deleting it, because as soon as your function hits its closing curly brace, the object is popped and released from the stack.

To place something on the stack you’d do the following:

int main() {
    // let's say that we have a Person that takes a string as its
    // constructor argument, you'd declare it thus:
    Person student("Joe Bloggs");

    // For a default constructor with no params you'd do thus:
    ConfigurationManager config;
}

Both these place objects on the stack rather than the heap.

Another option available to you, if you use a modern compiler (and why wouldn’t you?), is smart pointers.  These have been around for some time now, and the second they go out of scope, their destructors are called and the managed object is destroyed. Now, while this may sound similar to garbage collection, I’m reliably informed that it isn’t.

However there are a number of smart pointers that can be used.

unique_ptr is a smart pointer that holds sole ownership of an object via a pointer, and will destroy it once the pointer goes out of scope. You can’t have two unique_ptrs owning the same object. The code sample below shows a very basic demo of unique_ptr.

class someHugeObject {
public:
    // a whole bunch of functions in here....
    void someFunction();
};

void doSomethingWithObject() {
    std::unique_ptr<someHugeObject> huo(new someHugeObject());
}  // then huo is deleted when we get to this line.  Even though we've used the new keyword.

shared_ptr is, as the name suggests, a pointer that allows you to have multiple pointers pointing to the same piece of memory, so you could have many owners sharing the same block of memory. But unlike a unique_ptr, the object is only destroyed under the following circumstances:

a) The last remaining shared_ptr owning the object is destroyed or b) the last remaining shared_ptr that owns the object is assigned another pointer

std::shared_ptr<Person> p1(new Person("Fred Jones")); // we create a new person
std::shared_ptr<Person> p2 = p1;  // now p2 has access to the person in p1

// so if we do:
p1.reset();  // the Person will still exist because p2 is still pointing to it.
p2.reset();  // the last reference to the memory block has gone, so it now gets removed.

So in conclusion then, I can still hear my mentor’s words in my ear when he told me that I should never use new. And looking at this, you could say there’s a strong case for not doing so.  However I’d like to say:

Use the stack when you can, but when you need to, use the heap.  Also don’t be afraid to play: spin up a VM, have a play, cause a stack overflow and see what happens. It’s the best way to learn.

Happy coding people 🙂

An interview with Kevlin Henney

I recently got the opportunity to do an e-mail interview with Kevlin Henney. He is a well-known author, engaging presenter, and a consultant on software development. He was the editor for the book 97 Things Every Programmer Should Know, and has given keynote addresses not just at ACCU but at other conferences as well.

How did you get into computer programming? Was it a sudden interest? Or was it a slow process?

I was aware that computers could be programmed, and the idea sounded interesting, but it wasn’t until I was able to actually lay hands on a computer that I think it occurred to me that this was a thing that I could do myself.

What was the first program you ever wrote? And what language was it written in? Also, is it possible to provide a code sample of that language?

I can’t remember exactly, but I suspect it probably just printed “Hello” once. I strongly suspect that my second program printed “Hello” endlessly — or at least until you hit Ctrl-C. It was written in BASIC, and I strongly suspect that it was on a UK-101, a kit-based 6502 computer.

These days I am more likely to disavow any knowledge of BASIC than I am to provide code samples in it — but I think you can probably guess what those examples I just mentioned would look like!

What would you say is the best piece of software you’ve ever written? The one you’re most proud of?

Difficult to say. Possibly the proof-of-concept C++ unit-testing framework I came up with a couple of years ago, that I dubbed LHR. I don’t know if it’s necessarily the best, but it incorporated some novel ideas I’m proud of.

What would you say is the best piece of advice you’ve ever been given as a programmer?

To understand that software development concerns the management of complexity.

If you were to go back in time and meet yourself when you were starting out as a programmer, what would you tell yourself?

As a professional programmer? Don’t worry, it’s not all crap. As a schoolboy? Yes, it really can be as much fun as you think it is.

Do you currently have a mentor? And if so, what would you say is the best piece of advice you’ve been given by them?

I don’t currently have anyone I would consider a mentor, but there are a number of people I make a point of shutting up and listening to when they have something to say.

You are well known for giving excellent talks on various topics to do with Software Engineering, I recall the one you did at ACCU Conference last year. How did that come about? And how scary was it to leave the security of a regular 9 to 5 job and go solo?

I worked as a principal technologist at QA, a training and consultancy company, for a few years. Training was part of my job role and that gets you comfortable with presenting and thinking on your feet. Conference presentations are a little different as the objective of a talk and the environment of a conference are not the same as a course or a workshop, but there’s enough overlap that practice at one supports practice in the other.

As a principal technologist at QA I enjoyed a great deal of autonomy and so the transition to working for myself was not as jarring as it might first appear. Meeting people at conferences also opened more opportunities than I had perhaps realised were available when I was associated with a larger company.

I’m not sure I could have gone straight from working for someone to being independent. Actually, that’s not quite true: I went from being an employee to being a contractor many years ago, but I didn’t find that fulfilling.

And following on from that, what advice would you give to someone who’s looking to go it alone?

Make sure you know what your motivation is for going it alone, that your expectations are realistic and that you have some work lined up!

I’m guessing you work from home, if so, how do you keep the balance between work time and family time?

A question I’ve wrestled with for years and still not one I’m sure I have a good answer to! I am, however, far better at turning off than I used to be, recognising that work time is an interruption from family time and not the other way around. As I travel a lot the work–family distinction is often reinforced by whether I’m at home or away, so I try to get more work-related things done when I’m away because it doesn’t distract from family. I notice that when I’m working and at home the context switch can be harder because the context is effectively the same.

How do you keep your skills up to date? Do you get a chance to do some personal development at work?

I attend conferences, I talk to people I meet (and people I don’t meet) and I read. I probably get a lot more breadth than depth, but I temper that by focusing on things that interest me — so I’ll freely admit to being more driven by interest than necessity.

I’ve seen that you contribute to the Boost libraries as well. How did you get involved in that? And what advice would you give to a prospective developer looking to get involved in such a project? Or any open source project for that matter.

My involvement came about primarily because of my involvement in the C++ standards committee and writing articles about C++. That said, although I have a continued interest in Boost, I am no longer an active contributor, having long ago passed maintenance of my contributions to others.

As for advice on doing it: if you think you want to get involved, then you should. It’s worth spending your time familiarising yourself with the ins and outs and mores of your project of interest, asking questions, getting a feel for what you can best contribute and how. If you’re a developer, don’t assume it’s going to be coding where you stand to learn or contribute the most — maybe it’s code, maybe it’s tests, maybe it’s documentation, maybe it’s something else.

What would you describe as the biggest “ah ha” moment or surprise you’ve come across when you’re chasing down a bug?

That good practice I ignored? I shouldn’t have ignored it. I don’t know if that’s the biggest surprise — in fact, it’s the exact opposite — but it’s the biggest lesson. There’s nothing quite like the dawning, creeping realisation that the bug was easily avoidable.

Do you have any regrets as a programmer? For example wishing you’d followed a certain technology more closely or something like that?

Listing regrets or indulging in regret is not something I really do, which I would say is no bad thing — and not something I regret.

Where do you think the next big shift in programming is going to come in?

Realising that there are few big shifts in programming that change the fact that, ultimately, it’s people who define software. We have met the enemy and he is us.

Are you working on anything exciting at the moment? A new book? Or a new piece of software?

There’s a couple of code ideas I’m kicking around that I think are quite neat, but perhaps more for my own interest, and a couple of book projects that have my eye.

Finally, what advice would you offer to kids or adults that are looking to start a career as a programmer?

Look at what’s happening now, but also look at what’s gone before. If you can figure out how they’re related, you’re doing better than most.

Why version control is a life saver…

It was late, and I’d done a long day up to that point. And I was mucking about with some inconsequential code. In fact, I was trying to delete the directory, as I’d finished what I was doing with it.

So I executed that fateful command: rm -rf parserdemo/ *

Now I’m sure almost all my regular readers will know straight away where I went wrong. I didn’t immediately. I was tired and aching to get home, but after two minutes I thought, “that directory wasn’t that big!!”, so I Control-C’d it and did an ls.

It wasn’t a pretty sight. Almost every project I’d worked on had gone from my dev directory. Even more worrying, the work I’d done that morning had gone too. Now I usually commit my code at the end of the day (when I remember), so I cd’d into the project’s directory, and it looked OK; maybe I’d cancelled it in time. But no! I did a git status and got a nasty error message.

Fortunately for me, I actually committed the work earlier that day as I’d finished the ticket I was working on. So fortunately I could blat the directory, re-clone the project and check out against the appropriate branch and sure enough there was the code I’d lovingly written that morning.

The moral of this lesson?

Well first of all, be VERY CAREFUL using rm -rf, because it will just chomp through anything and everything if you’re not careful.

Secondly, if you don’t use version control, then do so.  It’s there for a VERY GOOD REASON.  And these days you can have your code hosted for free on github or bitbucket or any number of free source code repository sites.

Thirdly, commit your changes often, because you never know when you’ll need to roll back something you did by mistake. With git this is very easy; I don’t have experience of other version control systems. I do recall using one tool where you had to check out individual files, then check them back in against a central repository; you didn’t have the code on your hard disk the way you do with git. But I digress.

The fact of the matter is, I was tired and a wee bit irritable by that point of the day, and I hadn’t realised what I’d done until I looked at the command more carefully. But it’s the kind of mistake you’ll only make once.  Hopefully…

An interview with Scott Meyers

Unless you’re very new to C++ programming, or have been living in a cave, then you will have heard of Scott Meyers. He is the author of the best selling book Effective C++, and his latest book Effective Modern C++ has just hit the bookshelves. Scott has been a C++ consultant and trainer for at least twenty-five years. And he trained me on the new stuff in C++ in September, and so I got to carry out this interview face to face.

How did you get in to computer programming? Was it a sudden interest? Or was it a slow process?

I started programming in grade school, and they had a time sharing system, and there was a math teacher who recruited me and a couple of other people. And she thought that we might enjoy programming, which started out by playing games. This was on a teletype system with a piece of paper in it, and it was a 110 baud teletype thing. You could still play some remarkably good games on that system.

After we played some games, we thought it would be fun to learn how to program. Which was probably the math teacher’s idea all along. And we started to do that after school, as did my friend. And one thing led to another, and before you know it you’re spending way too much time in a former cloakroom closet with a teletype until they made you go home at six o’clock at night.

It was probably a small room as well? Probably about the size of a large wardrobe?

It was a former “cloak” room. So it was like a walk in closet that had a teletype, which made a lot of noise. It was a mechanical device, so they stapled eggshells as noise dampeners all the way around. So it was this narrow room, with two young men, and a machine that made a lot of noise. It must have smelled horrendous!

What was the first program you ever wrote? And in what language was it written in? Also is it possible to provide a code sample of that language?

I don’t remember what the first program I ever wrote was. The earliest program off hand I can remember writing was a couple of years after that, but it was a horse racing program. Which had all the sophistication you would expect from somebody who was fourteen years old.

It basically involved, as I recall, several horses that had to go a certain distance, and each iteration you chose a random number that would determine how much further they went. That’s about as sophisticated as it got. But it was kind of cool, because it’s on a teletype system. You can’t show things in real time, so what I’d do was have it type out x’s, like a histogram. But it’s a 110 baud teletype, so you’d see all the x’s, and then you had to wait before it printed out the next set of results. So there was a certain degree of suspense in trying to figure out which horse was going to win. Nothing fancy, I’m sorry.

What about a modern language? Such as C or C++?

In C, because I learned from Kernighan and Ritchie, I wrote Hello World. In C++ I also wrote Hello World. Because there’s a lot of merit in making sure you have all the pieces together to write computer programs.

What would you say is the best piece of advice you’ve ever been given as a programmer?

I don’t know what the best piece of advice I’ve been given is, but I’m going to turn the question around a little bit. In my youth, there came a point where I thought I knew everything. And I was working as a software developer at the time. And I made a comment about blah blah blah being impossible because of blah! And another guy looked at me and went, “you know…” and laid it out for me, that it was not impossible, and how it was not impossible and it turned out I didn’t know everything. And that was an important learning experience for me, to recognise that there’s a lot of stuff I don’t understand.

And what I’ve since learned over the years is that there’s a reason for everything. And if you look at something and it makes no sense or it seems crazy or it seems stupid, there’s a reason for it. And whoever came up with whatever you’re looking at, or whatever group came up with what you’re looking at, they had a reason for doing it. And it was probably a reasonable reason.

I’ve found that to be extremely helpful over the years, just to say “why were these things done?” Someone had the goal of achieving something good. So if you’re looking at something that’s overly complicated or doesn’t look correct, then it’s important to find out why it’s done the way it is.

You are very well known for the Effective series, how did that come about? Did anything make you decide “there needs to be one of these”?

I was working as a trainer at the time, and I was training C programmers using five day hands-on courses to learn C++. This was in the early 1990’s. So C++ was a simpler language back then; there were no exceptions, no templates, so it was a simpler thing. But when you teach C programmers, C++ programmers forget how much has to be learned. For the poor C programmers, they don’t know what a class is, they don’t know what a virtual function is, they don’t know what inheritance is, they don’t know what overloading is, they don’t know what constructors are, what destructors are, they don’t know what references are and how they’re different from pointers. The list goes on and on.

And so I would teach these poor C programmers over the course of five days how to use C++, and by Friday afternoon, their heads were swimming. They were thinking “I’m never going to be able to remember all this.” And five days is not a lot of time. So on Friday afternoon, I’d write on the whiteboard, and tell them “It’s not that hard, let’s list the stuff you really need to remember.” So for example, if you have a base class, you need a virtual destructor, or if you have a class with pointers then you need a copy constructor and an assignment operator, or you should never redefine a default parameter value, these sorts of things.

And I found that this made them feel better. So that’s how the list of items began. I taught somewhere, and I don’t recall where, and the group said “you should write a book.” And I said I’m not going to write a book. Then another place said, “you should write a book,” and at that time I was working on my PhD, and I thought “well, I could work on my PhD, which is hard, or I could write a book, which can’t be as hard as working on a PhD.” So I decided to write the book instead. It was a spontaneous decision, and the book was well received, and everything was derived from that.

If you were to go back in time and meet yourself at 14 years old, when you were in the teletype room, what would you tell yourself?

You don’t know everything. The world is more complicated than you think it is. If I’d learned that lesson a lot sooner, that would have been useful.

If you started your career as a programmer now, what would you focus on? Would it still be C++? And which field would you be interested in working in?

I think that for me personally, so much is a matter of happenstance. So the first language I learned was BASIC. I learned BASIC because the teacher said “Why don’t you learn BASIC?” And for that matter, the reason I learned C++ was because in graduate school I was required to TA [be a teaching assistant for] a course on software engineering, and the professor of that course decided “we’re going to use C++, so you must learn C++”. I didn’t choose either one of those things, they just sort of happened to me.

So I’m going to turn your question around a little bit, and say if it was my goal to introduce new people to programming, then what would I do? And what I would do is something mobile. I think that mobile devices are really interesting to people, and you can do all sorts of cool stuff. And I would do something using some technology which would allow people to get a lot done really quickly, with really fast turnaround. Because I think that’s what really hooks people: I can get this stuff to work. So the Hello World for me would probably be a way to very quickly write a little application that would send a message from one mobile phone to another, using giant libraries that the programmers would have no idea about how they worked.

For a lot of new developers, it’s important that they see something happening, so when I started on Visual Basic for example, you’d draw a button, write some code behind it, click on it, and something would happen, so the fact they can see what they’ve written actually does something is quite important.

I think the immediate feedback combined with high likelihood for success is what draws people in. At some point you have to get in to the nitty gritty stuff. But I wouldn’t start with C++. For one thing it’s not fun, it has no graphics, it has no networking. A lot of stuff is missing.

What would you describe as the biggest “ah ha” moment or surprise you’ve come across when you’re chasing down a bug?

The one that comes to mind is when I found out that the cause of the problem I was running into was an instruction in my program that was comparing two floating point numbers for equality. And it hadn’t occurred to me that just because two things are mathematically equivalent, that doesn’t mean you’ll get the same result on the computer. So I spent a lot of time tracking that down, and I’ve never forgotten that.

Have you got any tips for any new programmers that are chasing down bugs at the moment?

I would say that like most things, the way you get better at it is by doing more of it. But you have to do more of it and learn at the same time. So seek out other people who are better at it than you are, and try to continue to get new sources of information. If you’re tracking down a bug, and it’s difficult, get someone to help you. Ask other people what their ideas are, and that will help you develop an intuition of what to look for.

Do you have any regrets as a programmer? For example wishing you’d followed a certain technology more closely or something like that?

You know, I really don’t. I’d like to know more about some areas, but at the same time, I’m happy where I am. As a specific example, in the mid 1990’s when Java became very popular, a lot of people in the C++ community went “this is a cool thing!” and they moved across to Java, and I decided not to move to Java. Primarily because I thought “I’ve already learned one programming language, and the last thing I need to do is to prove that I can learn another programming language.” Which allowed me to completely and utterly miss the boat on Java. But as a result, I’ve stayed constant to C++, and I’ve learned a lot from it, so I don’t really have regrets as far as that goes.

Do you code in other languages? Such as Python or other scripting languages?

C++ is my language. It’s the only thing I do. If I were an actual programmer, I’d have to do a lot more. There are some other languages I can do a little bit, but C++ is really what I do.

From what I’ve read of your website, a lot of your work today is training and consulting, so I was wondering how do you make your code samples relevant to what the programmer may face in their everyday work?

What I try to do is to have enough contact with real programmers on a regular enough basis that I get feedback from them as to whether what I’m advocating or what I’m saying makes sense. And because I’m not a practicing programmer right now, and because I work by myself and I’m not surrounded by a bunch of other people in a company, I try really hard to stay in touch with real programmers developing real code.

My experience is if you give people advice that’s not practical, they will tell you right away. If you give them advice that’s too simple, they’ll tell you right away. For example we just spent the last two days talking about this sort of stuff, and people ask questions, people make comments, they have funny looks on their faces. And you get feedback as to whether what you’re telling them seems relevant to their job. And that’s my primary goal.

I feel I should apologise: I typed in some of your code during the course, and it didn’t do what you said it would do.

I love that! It reminds me I was in China at one time, and I was teaching a course and it happened to be on the internals of how certain things work. And I was talking about how virtual function tables are implemented under multiple inheritance.

So I do my presentation and I go “that’s how virtual functions are implemented under multiple inheritance.” And this one guy looks up from his computer and goes “no it’s not!”

So we looked at his compiler, and did some disassembly, and it turned out that the information I had was completely accurate for one compiler, and he was using a different compiler, which did things in a different way. So this is an example of how people keep me honest. So he learned that there were other ways to do it than that one compiler he was using, and I learned that there are other ways to do it other than the compiler I was using.

So I love it when people point out stuff that I wasn’t aware of.

Are you currently a mentor?  And if so, what do you do with your mentees?


Did you ever have a mentor yourself?

Certainly not by that name. I certainly learned from other people, but I never had a formal relationship with someone who was supposed to help me improve what I do. And in retrospect I learned mostly from peers with more or less the same amount of experience as I had, but I wasn’t a practicing programmer for all that long. I was a programmer for three years.

As a trainer, I suppose you read quite a lot? What would you say is the best book you’ve read?

Nothing comes to mind. What I will say is that I don’t read many books. I read a giant amount of blog posts, I read Stack Overflow when I get a chance or when I’m researching, I read a lot of e-mail that I exchange with other people, I read a lot of papers, I read a lot of online shorter stuff. I don’t read a lot of books. For example, we talked earlier (in the course) about Anthony Williams’ book, C++ Concurrency in Action. I didn’t read that whole book, but the chapter he has in there on the memory model is just killer! So I was very pleased with that. I don’t tend to sit down and read entire books, because in the C++ area, it’s uncommon to find a book that is filled with stuff I don’t already know.

So how do you go about learning new techniques then? For example, when move semantics came in with C++11/14, how did you go about learning that? I suppose I’m really asking: what’s your learning style? Do you read first and then dive in, or do you just dive in, or is it a mix?

It depends on what I’m trying to learn. Let’s take something like C++11/14, something that’s technically defined by the standard. What I normally do in those conditions is start with blog entries, where people have written overview blog entries or something like that, so I can try to build a mental model. And if I have questions, I get the questions answered in various ways, sometimes I turn to Stack Overflow, sometimes I turn to people that wrote standardisation proposals. I read a lot of standardisation proposals. I play around with compilers too, but I have to say that when it comes to standardisation-related stuff, compilers can be helpful, but if you want to know what the standard says, then you need to know what the standard says. So playing around with code is less useful, especially with technology that is newer, because compilers may not have implemented it yet.

On the other hand, for example we had that question in the course about what happens if you have an infinite loop in a constexpr function. To me, that’s a question compilers need to answer.

So we have an update to Effective C++, will you be doing an update to More Effective C++ and Effective STL? Or are all the updates incorporated in the new book?

To be clear, I didn’t update Effective C++. The new book is called Effective Modern C++. It has completely new information, so it’s not an update of Effective C++.


No that’s fine; actually I’m eager to get the word out about that. It’s a completely new book. It would have been a lot less work had it been a new edition.

As to the question of whether I’ll be updating any of my other books, that remains to be seen. I will say that it’s unlikely that More Effective C++ will get updated. That book is now almost 20 years old. A lot of the information is still useful, but I’m not sure at this point it’s really worth updating.

Where do you think the next big shift in programming is going to come in? I’ve noticed you’ve done some stuff with D. Do you see that as the next big thing? And for the uninitiated, what is D?

I’m not doing anything with D, actually. There are people in the D community that would like me to do something with D, people who I respect. But that’s not my plan. I was asked to give a keynote talk at the most recent D conference, that’s what I did. I went and gave a talk to the D people, and I basically encouraged them to avoid creating a language as complicated as C++, that was basically my message for them.

As for the next big thing, I’m really bad at predicting that sort of thing, so I’m not going to even try.

Finally, what advice would you offer to kids or adults that are looking to start a career as a programmer?

The people who I know who are good at programming, who are happy with their lives, that kind of stuff, they do it because they love it. And so I would say that if you are looking at it as though to say “this would make a nice career” but your heart isn’t in it, maybe you should be looking elsewhere. On the other hand, if you play around with it, if it seems fun, if you like the idea of controlling machines or if you want to accomplish something, then you should just do it. Because it can be fantastically rewarding. It’s a “Follow your heart” kind of thing. Because if you don’t like it, it’s hard!

Thank you very much for your time.

You’re very welcome.

An interview with Seb Rose

Seb Rose is well known to most ACCU members. For those who do not know who he is, Seb is an independent software developer, trainer and consultant based in the UK. He specialises in working with teams adopting and refining their agile practices, with a particular focus on automated testing.

Since he first worked as a programmer in 1980 (writing applications in compiled BASIC on an Apple II) he has dabbled in many technologies for many companies, including Linn Smart, Amazon and IBM. He has just finished writing “The Cucumber-JVM Book” for the Pragmatic Programmers. His website can be found at

How did you get in to computer programming? Was it a sudden interest? Or was it a slow process?

The maths department at my school had a PDP-11 and a room with 8 terminals. I spent an afternoon watching someone write BASIC and wrote my first program the next day. The head of department then gave me a few photocopied sheets from Kernighan & Ritchie’s “The C Programming Language” and I started learning more about what was happening under the surface.

A few years later I got my first holiday job working at a service station on the A3, getting paid the princely sum of 60p an hour. It sounds grim, but I earned enough to buy a record deck (anyone remember the PL-512?), which meant I needed even more money to buy albums. A neighbour put me in touch with his accountant who was in partnership with a guy in Teddington who was writing accountancy software in his attic. I went round for a chat and showed off my BASIC skills and landed the job. He apologetically told me that all he could afford to pay was £3 an hour – a 500% pay rise! I was ecstatic and I’ve been programming (on and off) ever since.

What was the first program you ever wrote? And in what language was it written in? Also is it possible to provide a code sample of that language?

BASIC. Nothing very exciting, just asking for input and printing out a response. I then took a huge leap and tried to write a text based Dungeons and Dragons game in C. It never worked, but I spent many a happy hour tinkering about with source code on huge print-outs. I returned to BASIC (compiled for the Apple II) for my holiday job and spent a lot of time doing screen layouts using 80 x 24 grids. There were some lovely hacks for positioning the cursor at a specific location on the screen which we used to access via GOSUBs whenever we needed to present output.

What would you say is the best piece of advice you’ve ever been given as a programmer?

Tricky question. Maybe the advice from Steve Freeman and Nat Pryce to “Listen to your tests.” It’s all too easy to blame a technique (or tool) that you’re trying to learn, when actually it’s your failings in other areas that are the root cause of the problem. When I find it hard to write a unit test, I remember their advice and look at the code I’m trying to test with a critical eye.

The old favourite “don’t optimise (yet)” is regularly useful. As developers we often think about performance prematurely. The “shameless green” stage of TDD encourages us to get the test working without thinking too hard about the design. It’s the “refactor” step where we improve the design of the code, but even here performance should generally be a subsidiary concern to readability. There will always be situations where you need every ounce of performance you can get, but they are few and far between in most domains, and you should only pursue optimisation once you actually have the data to show what actually needs optimising.

If you were to go back in time and meet yourself when you were starting out as a programmer, what would you tell yourself?

I’d probably tell myself: “Good choice.” It’s hard to think of a career that has so many varied opportunities or that would have allowed me to work in so many different areas. I’ve been a freelancer for most of my career, and the occasional breaks between contracts have been invaluable for learning new skills and trying different things.

I ran an organic smallholding for 12 years, at the same time as working as a contractor. I tried to have the contracts end in the spring, so that I could spend the sunny, Scottish days doing more physical work outdoors. For 3 years around 2003 I ran the organic business full-time and didn’t do any commercial programming. I actually found that I spent more time at a desk during this period than I did when I was contracting – maintaining the website and order system, generating delivery reports, dealing with customers etc.

It was a relief to be able to return to programming when, after our best year trading, the accounts showed I’d only made £12,000 from delivering organic produce.

Do you currently have a mentor? And if so, what would you say is the best piece of advice you’ve been given by them?

I haven’t got a specific mentor, but I treat most of the people I meet as potential mentors in one way or another. Recently Jon Jagger told me he spends 1/3 of his time earning money, 1/3 working on OSS and 1/3 fishing. I’ll not be taking up fishing, but this seems like a mix to aspire to.

How do you keep your skills up to date? Do you get a chance to do some personal development at work?

I spend a lot of time reading blogs and books, as well as going to conferences. In fact, you could say that my job IS keeping up to date, at least with a small segment of the industry.

Twitter is a great source of information – not the throw away one liners, but the links to blogs that I wouldn’t normally notice.

And I organise the local BCS branch events, which forces me to talk to a lot of people and actively seek out new and interesting topics and speakers.

What would you describe as the biggest “ah ha” moment or surprise you’ve come across when you’re chasing down a bug?

Working with Weblogic in the early days of EJB 1.0, we were getting inexplicable, intermittent failures. Days of investigation later, having boiled the sample code down to something really small, it became clear that there actually was a problem in the application server. We submitted a bug report and, in due course, received a patch. Months later the problem reappeared – the supplier had rolled out an upgrade WITHOUT the patch.

I remember the first time I heard you talk was at the ACCU Conference, where you were talking about Test Driven Development. There has been a lot of talk of late that TDD is dead (among the more extreme things I’ve seen). I’m guessing you wouldn’t agree with that assessment? Or is it a case of the way TDD is done, and how it’s implemented, that’s the issue?

I’m still talking about TDD. My current opinion is that TDD is a technique that is generally useful, but that context is, as always, an important consideration. The arguments stem from consultants making general statements that address segments of the development community without making it clear which segments they apply to. I say more about this in my session “So long, and thanks for all the tests” which is online at

The short answer is that TDD is not dead, but neither is it a silver bullet. It’s a useful technique to have in your toolkit, but like so many techniques it’s easier to describe than do. That’s why I think all developers should learn and practice TDD, at least until they know it well enough to make a considered judgement about whether it’s useful for them in their current role.

How scary was it to go from full time developer to free-lance developer? And how long did it take for you to feel confident to go for it?

I graduated in 1987 and did my first contract in 1992. In the intervening period I did 2 full-time jobs, one for 8 months and one for 5 months. So, I think it’s fair to say that I’ve always been freelance. I guess that confidence has not been too much of a problem for me.

That’s not to say that I haven’t worried about where my next paycheck will come from. I have, and at times when continuity was more important to me, such as when my children were young, I have returned to permanent employment. Strangely, though, I’m generally less stressed working as an independent than as an employee – and I don’t think I’m alone in this.

Do you have any regrets as a programmer? For example wishing you’d followed a certain technology more closely or something like that?

I wish I’d read more books and bought fewer.

Where do you think the next big shift in programming is going to come in?

I have no idea. In the nineties, when first introduced to HTML and the internet, I said the equivalent of “it’ll never catch on.” Niels Bohr is credited with saying “Prediction is hard (especially about the future)” and I’m worse than most at making predictions.

Finally, what advice would you offer to kids or adults that are looking to start a career as a programmer?

I think I’d give the same advice for any domain, not just programming. Have a go. Do something that interests you and keep trying. Qualifications may help, but enthusiasm and aptitude are by far the most important ingredients.