Vtables…a quick look

I’m sure it’s a term we’ve all heard in our C++ lives, right? But what are vtables? What do they give us in C++?

Well, they’re kind of a big deal actually.

First of all, as you’ve probably guessed, it stands for Virtual Table. But secondly, and perhaps more importantly, without vtables we wouldn’t have runtime polymorphism available to us in C++, as all calls to the functions would be bound at compile time.

So what is a Vtable?

Well, it’s a table of function pointers. Every class that declares (or inherits) a virtual function has one vtable, and each object of such a class carries a hidden vtable pointer, which points to the vtable for its type.

Let’s have a look at some code.

Consider:

struct baseStruct
{
    virtual void createWidgets() {}
};

struct derivedStruct: public baseStruct
{
    virtual void createWidgets() {}
};

void makeWidgets(baseStruct * widget)
{
    widget->createWidgets();
}

int main()
{
    derivedStruct d;
    makeWidgets(&d);
}

The widget pointer in makeWidgets is declared as a pointer to baseStruct, but at run time it actually points to a derivedStruct object. Since createWidgets() is virtual, the call has to resolve to derivedStruct::createWidgets().

At compile time, the compiler can’t know which code the widget->createWidgets() call will execute, since it can’t know what widget points to.

So the generated code looks up the createWidgets() entry in the object’s vtable at run time, and calls the function that entry points to.
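
To make this concrete, here’s a rough sketch of the machinery the compiler generates behind the scenes. This is my own illustration in plain C++ (the names are made up), not what any particular compiler literally emits:

#include <iostream>

// One entry per virtual function.
struct FakeVtable
{
    void (*createWidgets)();
};

void baseCreateWidgets()    { std::cout << "base\n"; }
void derivedCreateWidgets() { std::cout << "derived\n"; }

// One table per type, shared by every object of that type.
FakeVtable baseVtable{ &baseCreateWidgets };
FakeVtable derivedVtable{ &derivedCreateWidgets };

struct FakeObject
{
    FakeVtable* vptr;  // the hidden pointer each object carries
};

void makeWidgets(FakeObject* widget)
{
    // The "virtual call": look up the entry and call through it.
    widget->vptr->createWidgets();
}

int main()
{
    FakeObject d{ &derivedVtable };
    makeWidgets(&d);  // prints "derived"
}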

I’m still learning myself, but this is very much a quick overview of what a vtable is.

Happy coding

Looking under the hood….

Over the last year, I’ve been attending the UK BSI C++ panel meetings in London, and it’s been an eye-opening experience. I must confess that often I sit there listening and taking lots of notes of all the stuff I want to look up when I get home (it’s a long list…). But I was having a chat with a friend of mine, to whom I mentioned, “I want to look under the hood of the language, so I can follow these discussions…”

The glint in his eye should have been my cue to run away, but so eager was I to learn more that I didn’t see it. He gave me some helpful hints as to where to start, and that’s how this blog post came about. I’m going to be writing about reference/value semantics.

Let me say right now, I’m not a deep expert, this article is the result of a lot of reading, and trying to get my own head around it all.

C++ has value semantics by default, whereas other languages such as Java, Python etc have reference semantics by default. This clearly marks C++ as a different breed of programming language, which also raises some interesting questions (which my new mentor challenged me to think about…):

  • What does it mean to have value semantics by default?
  • How does this make a difference in C++?
  • How does this make C++ different to other languages?
  • What are the implications of these differences in regards to:
    • Performance?
    • Memory usage and allocation?
    • Resource management?

What does it mean to have value semantics by default?

Maybe a good starting point would be “What does value semantics mean?” In its simplest terms, value semantics is a term used to describe a programming language that is primarily concerned with the value of an object, rather than with the object itself. Objects are used to denote values; we don’t really care about the identity of the object in such a programming language.

Now, it’s important to note that when I speak of objects in C++, I don’t mean the Java or Python definition of an object, which is something that’s an instance of a class, and has methods and such. In C++, an object is a piece of memory that has:

  • an address (@0FC349 for example)
  • a type (int)
  • the capacity to store a value (42)

This leads to another important point to consider: the sequence of bits stored in memory is interpreted according to its type. For example, the bit pattern 01000001 can be read as the number 65 if the type is a short int, yet the very same bits can be interpreted as ‘A’ if the type is a char.
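
You can see this for yourself in a couple of lines (my own example, using the C++14 binary literal syntax):

#include <iostream>

int main()
{
    short i = 0b01000001;            // the bit pattern from above
    char  c = static_cast<char>(i);  // the same bits, viewed as a char

    std::cout << i << '\n';  // prints 65
    std::cout << c << '\n';  // prints A
}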

Now it’s important to note that C++ has value semantics by default. That is to say, there are no keywords or special symbols you need to use, to tell the language that you’re using value semantics.

Consider the following code snippet:

x = y;

What’s going on here? Well the = isn’t an equality operator in C++, it’s an assignment operator. And in this context, the value of y is being copied to x.

But x isn’t a value; it’s an object. So why isn’t x a value? Well, it’s because it can hold 1 at one moment and 45 at another. So if we want to know the value of x, we need to query the address x is held at to get that value.
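
Here’s a tiny illustration of my own contrasting the two kinds of semantics:

#include <iostream>

int main()
{
    int y = 45;
    int x = y;    // value semantics: the value of y is copied into x
    int& r = y;   // reference semantics (opt-in): r is another name for y

    y = 100;
    std::cout << x << '\n';  // 45  - x has its own copy
    std::cout << r << '\n';  // 100 - r refers to y itself
}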

So why did C++ go down the route of having value semantics over reference semantics?

  • Allocating on the stack is faster than allocating on the heap.
  • Local values are good for cache locality. If C++ had no value semantics, a contiguous std::vector wouldn’t be possible; you’d simply have an array of pointers, which would lead to memory fragmentation (see the snippet below).
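
A quick way to convince yourself of the contiguity point (a sketch of my own):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> values{10, 20, 30, 40};

    // Because the elements are values, not pointers to values,
    // they sit side by side in one contiguous block of memory.
    for (std::size_t i = 0; i + 1 < values.size(); ++i)
        std::cout << &values[i + 1] - &values[i] << '\n';  // prints 1 each time
}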

Why use value semantics then?

You get value semantics by default in C++, but you need to make a specific effort to use reference semantics, by adding a reference or pointer type symbol (&, *).

Using value semantics, we avoid memory management issues such as:

  • dangling references to objects that no longer exist
  • expensive and unnecessary free store allocations
  • memory leaks
  • juggling smart/dumb pointers

It also helps to avoid reference aliasing issues in multi-threaded environments. Passing by value and ensuring each thread has its own copy of the value helps to prevent data races.

You also don’t need to synchronise on such values, so programs run faster, and more safely too, as you avoid deadlocks.

It’s also beneficial for referential transparency. This means that we don’t get shocks or surprises from a value being changed behind the scenes.

And passing by value is often safer than passing by reference, because you cannot accidentally modify the parameters to your method/function. This makes the language simpler to use, since you don’t have to worry about the variables you pass to a function: you know they won’t be changed, and this is often what’s expected.
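
For example (a sketch of my own), a by-value parameter can’t touch the caller’s variable:

#include <iostream>

void addVat(double price)  // price is a copy; the caller's value is safe
{
    price *= 1.2;
    std::cout << "with VAT: " << price << '\n';  // 12
}

int main()
{
    double p = 10.0;
    addVat(p);
    std::cout << p << '\n';  // still 10: the function changed its own copy
}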

Then when do we use Reference Semantics?

We use reference semantics when something has to be in the same location in memory each time.  A good example of this would be something like std::cout or any such global.

You also use reference semantics when you want to modify the value you’re passing to your function, and this is made explicit in C++ by passing a reference (or a pointer) to your function.

e.g.

void foo::do_something(int & some_value) {
...
}

This is just a starter-for-ten type of article; I will go deeper into this as time goes on and I learn more 🙂

In the meantime, happy coding.

In The Beginning Was The Command Line…

More years ago than I care to admit, a friend of mine loaned me a copy of Neal Stephenson’s In The Beginning Was The Command Line, and miracle of miracles, I’ve finally managed to finish reading it.  (Sorry Chris!)   It really got me thinking about how I use a computer.

It’s a really thought-provoking book. Given it was written in 1999, it was surprising to me to see how little, in one sense, has changed in the way that we use our computers. When Stephenson was writing this book, Mac OS X was on the drawing board, given that Apple had bought NeXT back then. Microsoft was still pushing Windows 95, 98 and Windows NT, which, as we’re all aware, are UI-based operating systems.

And he makes the point that the OS software is no longer letting the user interact directly with the computer, but rather it decodes what the user wants to do, and issues the right calls, to make sure that it happens.

I for one am not a fan of that: as a programmer, I want to know EXACTLY what my computer is doing. (The irony is not lost on me as I type this blog post up in the WordPress desktop application.) And I’m sure I’m not alone in that my first experience of a computer didn’t involve a window, but rather a flashing prompt waiting for a command to be entered, or in the case of my ZX Spectrum, a BASIC keyword to be inserted.

And I for one really miss that. Don’t get me wrong, GUIs have their purposes, but as Stephenson says in his essay, they put layers of abstraction between the user and the hardware of the machine. And that can be both a blessing and a curse.

I code mainly on Linux, and therefore I have a UI editor, which at the moment is Visual Studio Code for my C++/Node.JS stuff, or IntelliJ if I’m doing stuff with Java. But I’ll always, and I do mean always, have a terminal window open, because frankly it’s a lot faster for me than trying to find the right option in the UI. The only reason I use an IDE these days is that it’s faster to navigate across the project structure than to remember which directory a certain file lives in. So both can live in perfect harmony.

But the command line can hold dangers for the uninitiated as well, as I found out to my cost in the early days of using my first PC. I was in the process of formatting some floppy disks, or at least I was trying to, when I noticed that the drive wasn’t making any noise, but the hard disk light was flashing quite a bit. I hit Ctrl-C, and found that I’d blatted most of my hard disk! (A mighty 40MB back then…) I thought I’d entered the correct command, but clearly I hadn’t, and so lost about a year’s worth of… well… accumulated shareware games (not a great loss), but I’d also wiped out half of my operating system, which was MS-DOS 4.2.

While there are dangers in misusing the command line, I also agree with Stephenson that OS companies are doing their users a disservice by hiding commonly used tools in some sub-menu of an application.

He cites the UNIX command wc as an example. In UNIX, wc gives you the number of lines, words or characters in a file (depending on the arguments you pass it). So to get the word count of a file you’d do:

wc -w <filename>

So if I had a file called greeting.txt with the text Hello World in it, the command would return:

emyrw@lothal:~/temp$ wc -w greeting.txt 
2 greeting.txt

Now, if you tried to find that in Word or something like it (although I know there’s a word count in the bottom bar of the application), you’d need to a) know where to look, or b) know which sub-menu the word count lives in.

When I was at university, I wrote my dissertation on an installation of Linux (I think it was Gentoo) and I had a terminal window open, and every time I wanted a word count, I ran the wc command.  For me it was much faster than trying to remember a menu somewhere.

Now I totally get it: not everyone wants to interact with a computer using a command line interface, but I hope that it’s not consigned to history as something old-hat. (I’ve had someone say that to me once…) The command line is one of the most useful things to learn to use. If your OS goes bang, and you know how to navigate around using the command line and which files to edit, then you have a chance of recovering your system.

If you’ve not read Neal Stephenson’s book, I can’t recommend it enough.

An Interview With Kate Gregory

Kate Gregory is a C++ expert who has been using C++ for nearly four decades. She is the author of a number of books, and an in-demand speaker who has given talks at ACCU, CppCon, TechEd and TechDays among many others. She is a Pluralsight author, a Microsoft Regional Director and a Microsoft Most Valuable Professional (MVP) for Visual C++, and despite her hectic schedule she still manages to write code every week.

  1. How did you get into computer programming? Was it a sudden interest, or was it a slow process?

I did my undergrad work at the University of Waterloo. I started in the Faculty of Mathematics and they taught us algorithms and Fortran as a first year course. I didn’t choose it, but I had to do it. Other such courses followed, and when I transferred to engineering I discovered this was a useful and in-demand skill. I got opportunities to program on my co-op jobs, and it kind of grew from there.

  2. What was the first program you ever wrote? And what language was it written in?

I don’t actually remember, but it’s a good bet it was assignment 1 in that first year Algorithms/Fortran course. And yes, punch cards were involved. My first program for money was a simulation of the way scale grows inside a pipe – in the piping of steam turbines scale forms in layers that can spall off and cause tremendous damage, so understanding that is an important problem. It led to a published paper for the researcher who hired me, and an award-winning co-op work term report for me. I probably should have demanded credit in the paper.

  3. What would you say is the best piece of advice you’ve been given as a programmer?

Sleep when the baby sleeps. It’s the best advice ever, and I pass it on whenever I can. Second best: the smaller the problem is, the more ridiculous it will be when you finally find it, the harder it is to find. Don’t feel bad about that. Laugh when you finally find the one character typo or the wrong kind of bracket or whatever the tiny thing is that has kept you aggravated for hours.

  4. How did you get into C++? What was it that drew you to the language?

In the late 80s, I needed to write some numerical integration programs for these multiple partial differential equations I was tackling for my PhD work on blood coagulation. Fortran, PL/1, COBOL, MARKIV and the like were just not going to work for me. My partner was doing some C++ at the time and it seemed like it was going to be much better. Turns out, it was! I had experienced the misery that was the Fortran “common block” so I didn’t want to use Fortran any more, and the other languages were mostly about manipulating text and records, turning input into output. C++ was a better fit for working with numbers, for implementing an algorithm, for giving me what I needed to show some properties of those equations.

  5. Since 2004, you have been a Microsoft Most Valuable Professional in Visual C++; how did that come about? That must feel pretty awesome?

MVPs are chosen primarily for their generosity. You can be a complete and utter expert on the C++ language or on Microsoft’s products, but if you don’t share that and help people with those tools, you won’t get the award. Mine I believe was triggered by my books, and more recently my activities on Stack Exchange sites, backed up of course by conference speaking. It’s nice to have that effort recognized. What I like best about the MVP program is the access it gives us to the team. I can reach just the right person if I have some issue with the Microsoft tools, and get advice or an explanation or “we’ll fix that in the next release.” Of course, the award certificates look good on my “bookshelf of showing off” as well.

  6. You hold an incredibly busy schedule between speaking at conferences, travelling and doing Pluralsight courses; how do you keep your skills up to date?

It’s part of my job to stay current. If I spend an hour (or an afternoon) swearing at a development environment on a platform I don’t normally use, well that counts as work. It’s as valuable work as preparing a talk or doing something billable for a client. So is reading long documents about what’s new in C++ or trying out a new library someone has released. What’s nice for me is that once I’ve put that learning time in, I can use it in many different ways – as the backbone of a talk, a blog post, to help a client going through the same thing, as part of a course, and so on.

  7. If you were to start your career again now, what would you do differently? Or if you could go back to when you started programming, what would you say to yourself?

I came up through a sort of golden age. You had to teach yourself things, or find someone to teach them to you, because there wasn’t a lot of training available. But then again, people didn’t demand credentials or challenge your background. If you said you could do something, the general response was to let you go ahead and show that you could. I think I would just reassure myself that my somewhat unusual path was going to work out to be amazing. I really only had a traditional job for two years after I finished my undergrad work. By the time my grad work was done I had a business with my partner, and we have made our own path for three decades now. It’s had some ups and downs, but I don’t think I would actually change any of it.

  8. If there is such a term, what would an average working day look like for you?

Oh there are most definitely no average days. I have routines when I’m home – swimming in the morning, coffee and email before I get out of bed – but I do so many different kinds of work that it’s hard to characterize. I try to react to my moods if I can – some days are better for writing a lot of code, others are better for big picture design whether of code, a course, or something I’m writing, and still others are the days when you have to catch up on emails, phone calls, paperwork, and buying things. Some days I might be elbow-deep in code when it’s time for my evening meal and I just keep right on working well past when I should have gone to bed. Other days I stop in the afternoon and for the rest of the day I just do something – anything – that isn’t work. It’s nice to be able to work according to my own rhythms. I have to be diligent about deadlines and promises, and some days I have to do things that are a little suboptimal because I have no more room to rearrange, but for the most part I do things I like from when I get up until when I go to bed, I do them side by side with my partner (my husband is my business partner) and I get paid for it, so that’s a pretty nice life, isn’t it?

  9. What would you say is the best book/blog you’ve read as a developer?

The Mythical Man Month got me thinking about the big picture of managing teams and people, managing projects, instead of just writing code. And it showed me that people can disagree and best practices can change. While I rarely draw on specific facts or quote from it, it changed the way I thought about creating software.

  10. Do you mentor other developers? Or did you ever have a mentor when you started programming?

Yes, I mentor others – I’ve done so as part of a paid engagement and I occasionally just offer unsolicited advice to those I think need it. People ask me to help them and if I can, I do. That doesn’t mean I’m going to write half their application pro bono, but I answer questions and suggest things to learn or try. I’ve been the happy recipient of a great deal of marvellous advice from friends and peers, folks who were a little further ahead on one aspect of all the huge difficulty that is being a developer, and would tell me things I needed to know or introduce me to the right people. I try to do the same for others as often as I can.

    1. If you do mentor others, how did that come about? Do you do face-to-face mentoring, or electronic mentoring?

Because I live in the middle of nowhere, most of my advising is not in person. I have had regular Skype calls with those who I am advising, and that works really well. Sometimes people email me their questions, or even message my public Facebook page, but the nice thing about Skype is I can see their screen, or show mine, while we’re talking live. That’s generally a lot better than email or other kinds of asynchronous messaging.

Then again, some of my most valuable advice has been given in restaurants and pubs. There’s no need to be in the same place if I’m explaining C++ syntax or architecture or “good design”, but career advice, soft skills things like dealing with difficult people or knowing if you’re charging enough for your time – that works better when we’re in the same place and relaxed. It’s one of the great things about conferences and other in-person get-togethers – a chance to give and get advice, or to listen to other people’s advice sessions.

  11. Finally, what advice would you give to someone who is looking to start a career as a programmer?

Be prepared to keep learning your whole life. Be prepared to spend a long time learning something, to use it for a while, and then to see it become useless. Don’t fight that, move to the next thing. Watch for the big architectural and people lessons that still apply even when you don’t work on that platform, in that language, or for that kind of business any more. Hold onto the wisdom you build up, while realizing you still need to learn new knowledge (language syntax, tool use, platform idiosyncrasies) every day.

You can learn from online courses, from working on a project on your own time, on the job if you’re lucky, from just trying things and then frantically Googling when they don’t work. You can combine dozens of different ways of learning things, getting unstuck when you’re stuck, and realizing when to give up and start over. We all feel stupid from time to time, that doesn’t mean we really are. (If you’re working with the pre-release of something, you may have found a bug – I’ve done it and most of my friends have too. It isn’t always you who’s wrong.) And we all have to start over – new languages, new tools, new teams, new platforms – from time to time. If you know how to learn, how to start at something and recognize where you can use things you know from before, how to ask the right questions and how to make sure you don’t have to ask the same question twice – you’ll be doing very well indeed.

Oh, and sleep when the baby sleeps. Do not forget that. In the larger sense, that advice applies even for people who never raise a baby. There are times in your life when there just isn’t time to do everything, so you have to do the most important thing whenever you get the chance. Don’t waste time doing the second most important thing if there’s a good chance you won’t get another opportunity to do the most important thing. When you have a new baby, that means you don’t tidy during naps – your most important thing is sleeping and you do it whenever you can. When you’re writing software, there’s never enough time for everything. If you spend time doing less important things, you may never get to do the most important ones. That’s a disaster. Know your priorities and don’t skimp on what’s most important when there isn’t enough time to go around – which is most of the time, to be honest.

Let it go….let it go…

I’ve not become a fan of Frozen, I promise.

During the last few weeks, I’ve been reading a LOT of code, mainly other people’s code, and something struck me: there was a LOT of commented-out code. And this code was checked in to version control too.

I may be approaching this from a simplistic point of view here, but I can’t fathom why people comment out code and then check that code in. I’ve heard a variety of reasons, some of them good, some of them not so good.

But I think I’ve nailed it down to this. Fear!

There’s a fear that we may need the code again at some point in the future.

However I would counter that with the following. If it’s not needed right now, then why is it in the code base? And more importantly, why is it checked in?

Commented-out code CAN be useful, such as when it provides an example of using a complex API. In this instance, however, it was code that was never going to be executed, as the developer had found a better way of doing it.

So it begs the question, why is the code still there?

I could understand if we weren’t using version control, or if the server was flaky, but servers are fairly robust these days, and besides, there’s redundancy as well. So again, why is it there?

I would say that we should be merciless with commented-out code; I’ll confess that I am in code reviews. It not only distracts the developer maintaining the code, but also disrupts the flow of the code when you’re reading it.

We shouldn’t be afraid of removing commented-out code from projects. That’s what version control is for. If we need that code again, we can easily get back to the revision where the code existed before it was removed.

Also, we shouldn’t be afraid of removing code that’s no longer in use! I did exactly that a few weeks ago. I was working on a legacy product, and there was a section of the project that I didn’t think was being executed.

I grepped for the class names and found they weren’t being used anywhere else. Once I’d done that, I removed their entries in the Maven pom files and tried to compile. It compiled without issue, and the product ran fine too. So after that I removed the directories and their contents, repeated the build and execute steps, and functionality wasn’t compromised.

Sometimes the best things we can do for our code, is to actually remove code. Whether it be commented out code, or code that’s never actually executed.


When a talk goes badly…and a lesson from the Space Station…

A month or so ago, I gave a talk at the ACCU Oxford user group on unit testing threaded code, and, long story short, it didn’t go all that well. Initially I was quite disappointed, and frankly felt like an idiot. The drive home in the car was quite a solemn affair; my friends who’d come with me were being supportive, but that didn’t change the fact that I’d basically delivered a turkey.

I’ve been mulling this over for the last month, and one of the books that helped me a lot was Chris Hadfield’s An Astronaut’s Guide to Life on Earth. And one of the points he makes in the book is to view negative things as an opportunity to learn.

Until recently, I didn’t cope well with negativity. My friends will agree that I’m very harsh on myself and I take things to heart far too easily. So having read Chris Hadfield’s book, I was sufficiently challenged, and decided to take a different approach to this.

So whereas before I’d have, if not quite imploded, at least been glum about it, this time I decided to view it as a chance to learn. So here we go.

Lesson 1 – Know your subject matter inside out.

Sounds like a no-brainer, doesn’t it? But I’d chosen a very complex topic. Probably too complex, if I’m completely honest with myself, and what’s worse is that I’d suggested it in jest and all of a sudden I was committed to doing it. So lesson 1a is: don’t jest 😉

But I digress. I’m passionate about writing unit tests for code; I think it’s something we should do as developers, no matter what language we’re coding in. For example, at this moment I’m researching a PHP unit testing framework.

But unit testing threaded code is hard. VERY hard, as a matter of fact. And there was a perfectly good reason Google didn’t return many results: it’s not something a lot of people do.

But worse than that, I wasn’t all that familiar with threads either. Sure, I have a rudimentary understanding of them, and know enough to be dangerous. And frankly, when presenting to a room full of experts, that’s not good enough. It’s bordering on disrespect, so to those who did attend and are reading this, I’m sorry.

But if you’re going to give a talk on something, make sure that you are fully versed and confident in what you’re going to say. I wasn’t and it showed. Badly.

Lesson 2 – Don’t use an IDE.

Another mistake I made was to have my code in an editor. Not so bad maybe, but unless you plan on doing a live coding demo, put the code on a slide. That way you’re facing your code, rather than craning your neck around to see what you’re doing on the projector.

I found I was doing this a lot, and lost my place a few times because of it. It also involved me dropping out of the presentation software and into the code window, which meant a lot of fiddling and faffing about.

So put your code on the slides unless you are doing a live coding demo, or doing a practical workshop or something like that.

Lesson 3 – Keep it simple

We all like to look cool, don’t we, with our snazzy slides and clickers. But I chatted with Hubert, who is a veteran speaker and trainer, and he advised that I simply put my slides in a PDF.

And that’s good advice. Imagine, you turn up to give your talk, and your laptop goes bang. What if the laptop you borrow doesn’t have PowerPoint?  That’s when a PDF is damn handy, especially when you have it on a USB stick.  You can give the talk on virtually any computer with a PDF file.

Hubert also advised that rather than use an expensive clicker, I should use my mouse to move through the slides, which is just as effective.

Basically, keep it as simple as possible, so that nothing overly complicated can break things.

Lesson 4 – Seek feedback, the good the bad and the ugly…

After the talk I felt quite embarrassed, but I knew that if I wanted to get better, I needed to hear the feedback, no matter how bad it was. And I was given it. (Hence this blog post.)

Sometimes the feedback you get isn’t what you want to hear, and I’d argue that sort of feedback is the best type. But it’s VITAL that you don’t take it personally. It’s not a personal attack! It’s an observation, and after all, if you’ve asked for feedback, whether good or bad, then you’ve got to have the stones to be willing to receive the bad as well as the good.

And it’s vital you act on it. Otherwise you’ve wasted the time of the person who was kind enough to give you that feedback. The further consequence is that they’ll notice you didn’t act on it, and they’ll refuse to give you feedback in future. And in doing so you’ve cut yourself off from a chance of improving.

Lesson 5 – It happens…

Sometimes, talks just don’t seem to go that well. You could have an off day or something which caused you to lose focus when giving the talk.

And sometimes, that’s how it is. A thousand and one things can cause a talk to nose-dive. The important thing is to recognise this, and not let it get you down.

Lesson 6 – There’s always next time.

You’ve delivered one bad talk. In the grand scheme of things, so what? It’s not life and death.

When Chris Hadfield went up to install Canadarm2 on the International Space Station in 2001, he had an irritation in his eye while he was carrying out a space walk to install the arm. His eye started watering and stinging pretty badly, and in space there’s no gravity for the tears to run away from the eye; they just stay there, building up, right up until they go over the bridge of your nose and start messing around with your other eye. On Earth that’s no big deal, but in space it’s pretty serious. You can’t exactly open the visor and give your eyes a rub, as that’ll end badly for you. Eventually his eyes cleared up, and he finished his work.

But my point is that one bad talk doesn’t mean the end of your career or anything that dramatic. Sometimes it just doesn’t work out the way you’d hoped. And that’s OK.

As long as you learn where you went wrong, then you can smack it out of the park at the next talk you give.

Lesson 7 – Attitude…

Something Chris mentions in his book (I’m not on commission, honest, but it is a great book!!): when the Soyuz is in orbit, or any solar-panelled space vehicle for that matter, it has to turn its solar panels towards the sun, and the orientation of the vehicle is known as its attitude.

And the same is true for us too. Our attitude dictates how we respond to certain events.

I’ll admit my confidence did take a knock after this talk. But that was a matter of attitude on my part. I could choose to let this crush me, which is what usually happens, or I could decide to use it as a learning experience.

That gave me a fresher perspective and the chance to learn from my errors, rather than risk repeating them. I can’t recall who said it, but “those who fail to learn from the mistakes of the past are doomed to repeat them.”

Lesson 8 – There are positives in there somewhere too..

So my talk bombed, badly, frankly. But what it did do was spark an interesting discussion on how we could go about writing unit tests for threaded code, and the techniques that could be utilised.

So even if the talk doesn’t go well, some good can come out of it. And we should remember that as well.

Conclusions

So there we go. I’ve learned a lot from things going wrong, and often that’s when we learn the most. I have to say that the people at the meeting were very kind and supportive in their feedback, even if it wasn’t what I wanted to hear. (They were far too nice!!) But they were also willing to give support and encouragement.

Initially I wasn’t going to do another talk this year after that one bombed so badly, but I’ve since been encouraged to do another lightning talk, so we’ll see if I’ve really learned my lessons. Watch this space.

ACCU Conference Retrospective

I did plan to write a series of blog posts at the end of each day of the ACCU Conference in Bristol, but between having far too much fun learning, and having zero energy by the time I got home, never mind zero brain power, I thought I’d do a write-up in one big post with some “Match of the Day” style highlights.

Day 2 (Tutorial was day 1 for me…)

Day 2 kicked off with the ever-energetic, lime-green-nailed Pete Goodliffe bouncing around the stage, giving a talk on how to become a better programmer. He covered various aspects of becoming a better programmer, but the main point I think he made was attitude: our approach to becoming better. This challenged me (yet again! Thanks Pete (: ) because while I’d been moving forward to becoming better, my attitude at times frankly sucked!

Pete also issued a challenge to us all. We were dared NOT to go to the stuff we’d be comfortable with, but to stretch ourselves and head off to talks that were way outside our comfort zones, and attend as novices. (That wasn’t hard for me; I was a novice at most talks!) That way everyone would leave the conference having learned something they didn’t know before.

After being filled with coffee and all sorts of pastries, it was on to Seb Rose’s talk on low-fidelity approaches to software development. He spoke about how we tend to bite off more than we can chew, working on something that’s almost ready, and will stay like that, and never get delivered. He also made a quick point about waterfall: its biggest issue is that everything feels OK until, all of a sudden, bang!

Seb also made the point that feedback is a very important thing to make use of, because it allows us to see where we’re at right now. It also gives us a chance to take stock of what we’ve done, and what our customers think. And he pointed out that the military use Plan-Do-Check-Act, which gives feedback that allows us to change the next part of the plan.

He also referenced Father Ted, the scene involving Ted explaining perspective to Dougal with a toy cow: this one is small because it’s close, but the ones outside are far away. And it’s the same with software projects. Until we start working on something, we can’t understand the full scope of the problem we’re working on.

From there it was on to Mike Long’s talk on How To Talk To Suits, where I learned a great deal about speaking to business managers, and I thought I already did this pretty well, as I don’t like bamboozling people with technical jargon. This also included a practical workshop to work through, which was good fun.

Mike spoke excellently on this, and talked us through a bunch of business clichés, such as the Business Case, where we present to someone for an allocation of resources for the stuff we want to do. On “time is money”, Mike pointed out that from a business perspective, time matters more than money; indeed, I’ve often come across this in my career, up to a point: “we don’t care how much it costs (within reason), but we must have it by tomorrow”.

Mike also used real-world examples to explain that money comes in many flavours. That is to say, if we wanted a new server, for example, and our hardware budget was fully allocated, we could STILL get the money, but from another budget, such as an innovation budget. Hence the term that money comes in many flavours.

There were some great lightning talks as well, and Chris Oldwood did some stand-up, which was great fun. It’s a shame it wasn’t recorded, as there were some belters; who knows, he may pop them on Twitter.

Day 3

Day three kicked off with a frankly amazing keynote from Axel Naumann on how CERN use C++, and it was awesome to hear how C++ is used to process data from the Large Hadron Collider. What was epic for me was to see some of the stuff that the LHC produced, and the fact that it was all completely open. Axel also spoke about an experiment they’d carried out where they fired neutrons at nuclear waste, which shortened its half-life but also generated energy!

Then there was an interesting talk from Kate Gregory and James McNellis on modernising legacy C++, in which they raised some excellent points. They made the point that we should compile C code as C++, as we’ll get better type checking. They also made the excellent point that the warning you ignore isn’t a warning. Kate and James suggested that to modernise legacy C++ code we should do the following:

  • Increase the warning level and compile as C++
  • Rid yourself of the pre-processor
  • Learn to love RAII (Resource Acquisition Is Initialisation; see the sketch after this list)
  • Introduce exceptions, but carefully
  • Embrace const
  • Cast correctly
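
On the RAII point, here’s a minimal sketch of my own (not from the talk) of what the idiom looks like: the resource is acquired in the constructor and released in the destructor, so clean-up happens automatically however the scope is left.

#include <cstdio>

class File
{
    std::FILE* handle;
public:
    explicit File(const char* name) : handle(std::fopen(name, "r")) {}
    ~File() { if (handle) std::fclose(handle); }

    File(const File&) = delete;             // forbid copying: no double-close
    File& operator=(const File&) = delete;

    std::FILE* get() const { return handle; }
};

int main()
{
    File f("config.txt");  // hypothetical file name
    // ... use f.get() with std::fgets and friends ...
}   // f goes out of scope here and the file is closed automatically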

After that, there was an excellent talk by Chris Smith and Mark Upton from Redgate on what’s wrong with sprint retrospectives and how to fix them. This was a very practical talk with a fair bit of audience interaction, where we shared our experiences of working with retrospectives. They shared their experiences of improving retrospectives at Redgate, and had a lot of good ideas I plan to put in place at work.

Day 4

We were treated to a great opening talk where Alison Lloyd went through some case studies of various mistakes made in industry and what we could learn from them. I was scared initially that there’d be photos of Therac-25 victims; however, there was nothing like that. Alison started her talk with a sobering discussion about diarrhea and how it has caused so many deaths. It was quite educational and challenging to hear the devastating effect that this had on humans, and HOW it caused so many deaths as well.

This talk was running through my mind for most of the day if I’m honest, and I’m pretty sure I’m not the only one who was challenged and moved by what I heard in those opening 15 minutes.

Day 5

I had the unfortunate experience of turning up to a talk and, needing the loo, popping out for a second or two. When I came back the door magnet had engaged, so I couldn’t get back in. I didn’t want to knock either, as I didn’t want to disturb the chap giving the talk.

However, Anthony Williams’ talk on C++ atomics was very good. I didn’t understand all of it, but I certainly got the gist of what was being said. Essentially, don’t use atomics unless you have to. You should only use them if you REALLY need the performance gains they give you, and even then you should stick to memory_order_seq_cst (sequentially consistent), as the other orderings are horribly complex and you should only use them if you REALLY know what you’re doing.
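
For what it’s worth, here’s a minimal sketch of my own showing the default, sequentially consistent usage (this is just the standard std::atomic API, not code from the talk):

#include <atomic>
#include <iostream>
#include <thread>

int main()
{
    std::atomic<int> counter{0};  // operations default to memory_order_seq_cst

    auto work = [&counter] { for (int i = 0; i < 1000; ++i) ++counter; };

    std::thread t1(work);
    std::thread t2(work);
    t1.join();
    t2.join();

    std::cout << counter << '\n';  // always 2000: no data race, no torn updates
}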

There was a spare slot on the Saturday, so I volunteered to fill half of it with another chap. The only issue was that Roger and Kevlin were also speaking, so I knew that most people, if not everyone, would be at one of those two talks, and I expected nobody to turn up to the talk I gave. But two guys who’d turned up to the previous talk stayed. So rather than stand up and do a talk, I made it more informal and turned it into a chat with slides, which I think went quite well. I’ll be honest, I knew there weren’t going to be many folks at this one, but it was a good way to see if I could do this conference talking thing.

The closing keynote was Chandler Carruth from Google, talking about how they’ve made C++ safer, easier and faster with their use of Clang. It was a great talk with some live code demos as well. He told us how Google has a completely unified codebase, which allows for a single unified build system. Chandler also spoke of the things that make C++ safe and quicker, but I wasn’t quick enough to grab these as notes; the slides should be available though.

Conclusion

I’m not sure if it was just me, but this conference felt different. It felt like I connected more with the material and the topics, and that could well be because I’ve developed as a programmer since the last conference. But there was a pleasing and friendly atmosphere this year; not that there wasn’t one before, but it felt more tangible this year, at least for me.

I also had the chance to make new connections, as well as catch up with those I made connections with last year. A lot of people came up expressing an interest in being interviewed for the CVu magazine, and those I approached to ask were equally nice.

I was sad to hear that Jon Jagger, who’d been conference chair for the last four years, had stepped down. I’m certain that I’m not alone in saying that Jon arranged an amazing conference this year, as he has done for the last four years, and the fact he got a full minute of applause and cheers speaks volumes for how good a job he’s done. I do look forward to hearing him speak next year though (:

However, Russell Winder is the new conference chair, and I know that the conference is in safe hands. So I’m excited to hear what’s coming in 2016; I may even put in a paper this time 😉

If you want to see what went on and who said what, the slides are available on the ACCU website at http://www.accu.org, and if you like what you see, then consider becoming a member 🙂

ACCU Tutorial Day 1 Review

So it’s April, which means the ACCU Annual Conference is here, and it was off to Bristol with me to attend the pre-conference tutorial. The original tutorial I’d registered for had been pulled, as the speaker (name here) had been taken seriously ill; I wish him a speedy recovery and hope to hear his tutorial next year if he gives it. So I had to choose another tutorial to attend, and I chose Kevlin Henney’s Raw TDD.

It was an excellent session; in the first portion, Kevlin presented the various facets that make TDD what it is, as well as defining what TDD is and what it isn’t.

After lunch, using Jon Jagger’s excellent cyber-dojo, we practised developing using a pure test-driven approach, and Kevlin was very strict on this as well. It was quite a challenge. I must confess I thought what I did at work day to day was TDD, but it turns out I have GUTs (Good Unit Tests) rather than a pure TDD approach.

After we’d switched coding partners a couple of times, we started to write our own testing framework. Now, there are many excellent frameworks out there, so Google Test has nothing to worry about at the moment, but it was awesome to see how relatively easy it is (albeit with a fair amount of knowledge) to write your own unit testing framework. Ours was based on the assert test, and we built upon it.

All in all, it was a very enjoyable day of learning, and as ever, Kevlin’s style of delivery was its usual energetic, engaging and enthusiastic self.

As a side note…

If you’ve never been to the ACCU Conference before, I’d strongly recommend going. It’s a world-wide gathering of C++ programmers, and in the past we’ve had talks given by Bjarne Stroustrup, Scott Meyers, Uncle Bob Martin, Michael Feathers and many others besides.

Also consider joining the ACCU.  It’s a great organisation, and it doesn’t cost all that much to join. And it has an awesome community of people. And if you’re a member you get a sizable discount on the conference, so double bonus (:

Out with the new!

Before I became a C++ developer, I wrote a lot of Java, and I mean a LOT. When I was unemployed I wrote a point-of-sale system, a stock management system and a web servlet app. I wrote pretty much everything in Java back then.

Then I learned C++, and I didn’t know that you could write C++ in a Java fashion. Indeed, my mentor at the time saw some code I’d written, and just like a Java developer, I’d put everything in one file within a class.

And wouldn’t you know it, now I’m the mentor, and I’m finding my mentee is doing the exact same thing. Some may ask why that’s bad. It all has to do with a simple word, and that word is: new.

In Java if you wanted to create an object, you’d do something like this:

public class Main
{
    public static void main(String[] args)
    {
        Person fred = new Person();
        fred.doSomething();
    }
}

This looks all nice and dandy, doesn’t it? You simply create a new Person object and then forget about it. Now I want to point out that in Java this IS NOT a bad thing. Java has a special mechanism to deal with it called garbage collection: something that runs periodically, checks for objects that are no longer reachable or have gone out of scope, and deletes them.

However, C++ doesn’t have this feature (well, not as such, but that’s beyond the scope of this blog post). C++ is a very powerful language, and it trusts that you know what you’re doing, and this is true of object management.

C++ doesn’t clear down objects for you; you’ve got to do that yourself. In C++, for every, yes EVERY, object that’s created with the keyword new, there MUST be a corresponding delete.

So consider the following code:

int main()
{
    Person* p1 = new Person();
    p1->doSomething();
    // now we're done with the person object.
    delete p1;
}

The above code not only creates a Person object, but also deletes it. An important thing to learn is to delete things in the reverse order you create them. It’s considered good practice, and can prevent possible undefined behaviour.
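
For instance (a sketch of my own, with made-up Car and Engine classes), if one object holds a pointer to another, the order of deletion matters:

#include <iostream>

struct Engine
{
    void stop() { std::cout << "engine stopped\n"; }
};

struct Car
{
    Engine* engine;
    explicit Car(Engine* e) : engine(e) {}
    ~Car() { engine->stop(); }  // the destructor still uses the engine
};

int main()
{
    Engine* engine = new Engine();
    Car*    car    = new Car(engine);

    delete car;     // reverse order of creation: engine is still alive here
    delete engine;  // deleting engine first would leave ~Car with a dangling pointer
}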

The other thing new does is create your object on the heap. “So what?” I hear you ask. Well, let me explain a little about why this isn’t ALWAYS what you want…

A very quick note on stacks and heaps…

When you write a program, it can be stored in one of two places in memory. The stack, and the heap.

The stack is memory that’s allocated for a thread of execution. So, for example, when your function is called, a block is reserved on top of the stack for local variables and some bookkeeping data pertaining to your function. When the function returns or hits its last brace, the block is freed automatically and can be used for another function. Imagine, if you will, a stack of plates in a cafeteria; a memory stack is identical. Last In, First Out, otherwise known as LIFO: the most recently used block is always going to be the one that’s freed first.

Stacks don’t tend to be that big, so you wouldn’t want to put a 512MB vector on there, for example. That’s what the heap is for.

The heap is a chunk of memory that’s allocated for dynamic allocation. There’s no enforced pattern like there is on the stack; on the heap you allocate your data pretty much where you want, as long as there’s a contiguous space big enough to hold it.

This is where your data is put when you use the new keyword in your code, and if you don’t call delete on your new’d object, it will stay in memory, swallowing up resources.

So what are the options?

Now that we know where new’d objects go, what are the alternatives? Well, you can create things on the stack; however, as previously mentioned, the stack isn’t as big as the heap, so we must be judicious about where in memory we place our objects. For example, if you have a class object that you know is going to hold a massive vector of objects, place it on the heap using the new keyword. Otherwise, put it on the stack. One of the main benefits of this approach is that you don’t have to worry about deleting it, because as soon as your function hits its last curly brace, it’s popped and released from the stack.

To place something on the stack you’d do the following:

int main()
{
    // let's say we have a Person class that takes a string as its constructor
    // argument; you'd declare it thus:
    Person student("Joe Bloggs");

    // For a default constructor with no params you'd do thus:
    ConfigurationManager config;
}

Both of these place objects on the stack rather than the heap.

Another option available to you, if you use a modern compiler (and why shouldn’t you?), is the use of smart pointers. These have been around for some time now, and they act a little like objects created with the new keyword in Java, in that you never delete them yourself: the second a smart pointer goes out of scope, its destructor is called and the object it manages is destroyed. While this may sound similar to garbage collection, I’m reliably informed that it isn’t: destruction happens at a deterministic point (the end of the scope), rather than whenever a collector gets around to it.

However there are a number of smart pointers that can be used.

unique_ptr is a smart pointer that holds sole ownership of an object via a pointer, and will destroy that object once the unique_ptr goes out of scope. You can’t have two unique_ptrs owning the same object. The code sample below shows a very basic demo of unique_ptr.

class someHugeObject
{
    // a whole bunch of functions in here....
public:
    void someFunction();
};

void doSomethingWithObject()
{
    std::unique_ptr<someHugeObject> huo(new someHugeObject());
    huo->someFunction();
}  // then huo is deleted when we get to this line.  Even though we've used the new keyword.
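
As a side note, if your compiler supports C++14, std::make_unique is a tidier way to write the same function; this is standard library functionality rather than anything specific to this example:

#include <memory>

void doSomethingWithObject()
{
    // no raw new in sight, and exception-safe to boot
    auto huo = std::make_unique<someHugeObject>();
    huo->someFunction();
}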

shared_ptr is, as the name suggests, a pointer that allows you to have multiple pointers pointing to the same piece of memory, so you could have many objects sharing the same block of memory. But unlike a unique_ptr, the object is destroyed only under the following circumstances:

a) the last remaining shared_ptr owning the object is destroyed, or
b) the last remaining shared_ptr owning the object is assigned another pointer.

std::shared_ptr<Person> p1(new Person("Fred Jones")); // we create a new person
std::shared_ptr<Person> p2 = p1;  // now p2 has access to the person in p1

// so if we do:
p1.reset();  // the Person will still exist because p2 is still pointing to it.
p2.reset();  // the last reference to the memory block has gone, so it now gets removed.

So, in conclusion, I can still hear my mentor’s words in my ear when he told me that I should never use new. And looking at this, you could say there’s a strong case for not doing so. However, I’d like to say:

Use the stack when you can, but when you need to, use the heap. Also, don’t be afraid to play: spin up a VM, have a play, cause a stack overflow and see what happens. It’s the best way to learn.

Happy coding people 🙂

An interview with Kevlin Henney

I recently got the opportunity to do an e-mail interview with Kevlin Henney. He is a well-known author, an engaging presenter, and a consultant on software development. He was the editor of the book 97 Things Every Programmer Should Know, and has given keynote addresses not just at ACCU but at other conferences as well.

How did you get into computer programming? Was it a sudden interest? Or was it a slow process?

I was aware that computers could be programmed, and the idea sounded interesting, but it wasn’t until I was able to actually lay hands on a computer that I think it occurred to me that this was a thing that I could do myself.

What was the first program you ever wrote? And what language was it written in? Also, is it possible to provide a code sample of that language?

I can’t remember exactly, but I suspect it probably just printed “Hello” once. I strongly suspect that my second program printed “Hello” endlessly — or at least until you hit Ctrl-C. It was written in BASIC, and I strongly suspect that it was on a UK-101, a kit-based 6502 computer.

These days I am more likely to disavow any knowledge of BASIC than I am to provide code samples in it — but I think you can probably guess what those examples I just mentioned would look like!

What would you say is the best piece of software you’ve ever written? The one you’re most proud of?

Difficult to say. Possibly the proof-of-concept C++ unit-testing framework I came up with a couple of years ago, that I dubbed LHR. I don’t know if it’s necessarily the best, but it incorporated some novel ideas I’m proud of.

What would you say is the best piece of advice you’ve ever been given as a programmer?

To understand that software development concerns the management of complexity.

If you were to go back in time and meet yourself when you were starting out as a programmer, what would you tell yourself?

As a professional programmer? Don’t worry, it’s not all crap. As a schoolboy? Yes, it really can be as much fun as you think it is.

Do you currently have a mentor? And if so, what would you say is the best piece of advice you’ve been given by them?

I don’t currently have anyone I would consider a mentor, but there are a number of people I make a point of shutting up and listening to when they have something to say.

You are well known for giving excellent talks on various topics to do with software engineering; I recall the one you did at the ACCU Conference last year. How did that come about? And how scary was it to leave the security of a regular 9-to-5 job and go solo?

I worked as a principal technologist at QA, a training and consultancy company, for a few years. Training was part of my job role and that gets you comfortable with presenting and thinking on your feet. Conference presentations are a little different as the objective of a talk and the environment of a conference are not the same as a course or a workshop, but there’s enough overlap that practice at one supports practice in the other.

As a principal technologist at QA I enjoyed a great deal of autonomy and so the transition to working for myself was not as jarring as it might first appear. Meeting people at conferences also opened more opportunities than I had perhaps realised were available when I was associated with a larger company.

I’m not sure I could have gone straight from working for someone to being independent. Actually, that’s not quite true: I went from being an employee to being a contractor many years ago, but I didn’t find that fulfilling.

And following on from that, what advice would you give to someone who’s looking to go it alone?

Make sure you know what your motivation is for going it alone, that your expectations are realistic and that you have some work lined up!

I’m guessing you work from home; if so, how do you keep the balance between work time and family time?

A question I’ve wrestled with for years and still not one I’m sure I have a good answer to! I am, however, far better at turning off than I used to be, recognising that work time is an interruption from family time and not the other way around. As I travel a lot the work–family distinction is often reinforced by whether I’m at home or away, so I try to get more work-related things done when I’m away because it doesn’t distract from family. I notice that when I’m working and at home the context switch can be harder because the context is effectively the same.

How do you keep your skills up to date? Do you get a chance to do some personal development at work?

I attend conferences, I talk to people I meet (and people I don’t meet) and I read. I probably get a lot more breadth than depth, but I temper that by focusing on things that interest me — so I’ll freely admit to being more driven by interest than necessity.

I’ve seen that you contribute to the Boost libraries as well. How did you get involved in that? And what advice would you give to a prospective developer looking to get involved in such a project? Or any open source project for that matter.

My involvement came about primarily because of my involvement in the C++ standards committee and writing articles about C++. That said, although I have a continued interest in Boost, I am no longer an active contributor, having long ago passed maintenance of my contributions to others.

As for advice on doing it: if you think you want to get involved, then you should. It’s worth spending your time familiarising yourself with the ins and outs and mores of your project of interest, asking questions, getting a feel for what you can best contribute and how. If you’re a developer, don’t assume it’s going to be coding where you stand to learn or contribute the most — maybe it’s code, maybe it’s tests, maybe it’s documentation, maybe it’s something else.

What would you describe as the biggest “ah ha” moment or surprise you’ve come across when you’re chasing down a bug?

That good practice I ignored? I shouldn’t have ignored it. I don’t know if that’s the biggest surprise — in fact, it’s the exact opposite — but it’s the biggest lesson. There’s nothing quite like the dawning, creeping realisation that the bug was easily avoidable.

Do you have any regrets as a programmer? For example wishing you’d followed a certain technology more closely or something like that?

Listing regrets or indulging in regret is not something I really do, which I would say is no bad thing — and not something I regret.

Where do you think the next big shift in programming is going to come in?

Realising that there are few big shifts in programming that change the fact that, ultimately, it’s people who define software. We have met the enemy and he is us.

Are you working on anything exciting at the moment? A new book? Or a new piece of software?

There’s a couple of code ideas I’m kicking around that I think are quite neat, but perhaps more for my own interest, and a couple of book projects that have my eye.

Finally, what advice would you offer to kids or adults that are looking to start a career as a programmer?

Look at what’s happening now, but also look at what’s gone before. If you can figure out they’re related, you’re doing better than most.