Greg Low: Introducing Show 10 with guest Graeme Simsion.
So our guest today is Graeme Simsion. Graeme is the author of Data Modelling Essentials, Third Edition, and one of the best-known authorities internationally on data modelling and data management. He was a keynote speaker at the Australian Data Warehousing conference in Canberra and has previously been the keynote at the American and European conferences. He has a reputation for challenging conventional wisdom and for being an entertaining speaker. Graeme was the founder and CEO of the consultancy Simsion Bowles and Associates, sold the company in 1999, and now divides his time between data modelling master classes around the world, research, and consulting.
So, welcome Graeme.
Graeme Simsion: Thanks Greg.
Greg Low: Can I get you first up to just give us some background, how you got to be involved with data and data modelling, and how you get to be where you are today?
Graeme Simsion: Well, it goes back a very long time. I spent about 25 years or so working backwards through the system development lifecycle. I started off as a computer operator and went into programming, and at that stage database management systems were fairly new on the scene and it seemed like an interesting thing to get involved with. So, at Colonial Mutual, I spent some time on the database administration team, eventually becoming the DBA there and being involved in quite a big project. While I was a DBA I had to work with data modellers, and it wasn't always a happy experience, so I felt that if I couldn't beat them I should probably join them and learn something about that part of things. So I became interested in data modelling. About that point I went out and started my own consultancy, which gradually grew over the years, and I found myself moving back from data modelling to data management, which was the bigger picture, to information planning, to business process design and business process re-engineering. Ultimately, in the last three or so years, most of the consultancy work I was still doing, because much of my time was in management, was actually at the business level — some of it with an IT component, but some of it just straight-out business strategy and business planning. Then after I sold the company I decided it was a good time to go back to university and do some research, and I decided to go back to my roots and get involved in data modelling again. For the last three or four years I've been doing research in data modelling, I've been teaching it, and rediscovering some of the things I thought I knew about the subject. Does that give you a bit of a picture?
Greg Low: Yeah, indeed. That’s really good. I must admit just listening to you I was kind of intrigued, thinking back at the way people used to work, moving across from operations originally, but there was always a career path into programming and it just kind of struck me that seems to be an area that just doesn’t seem to happen anymore. People tend to be going into specialist programming roles. Do you think that was a good thing or a bad thing?
Graeme Simsion: I'm less worried about people going straight into specialist programming roles, although I have to say, when I went from operations to programming, I knew stuff that the other programmers didn't know. The thing I knew was that I could see those tapes turning and knew what my programme was making happen. I was able to ground what I was doing; I had something concrete. All through my career I have realised that people in general are not very good abstract thinkers, and unless they've got something concrete to refer back to they often do some pretty silly things. The relevance of that to data modelling is that I personally think a background in database design is fundamental to being a good data modeller. When people say to me, we've just got this person from the business to come and join us, he or she is going to be a good data modeller, what sort of training should we give them — get them to develop some simple databases and get a feeling for what these things really are, so that whenever you're doing your data modelling you can see the concrete manifestation of what you've done. I think we might've wandered off the topic a bit. I think breadth in a career is a wonderful thing. I think I learnt more from being away from data modelling, and particularly away from data management, than I learnt from being inside — it gave me another perspective on them, particularly on data management. The most important area to me was the data management stuff, where my experience in information planning and strategic planning with managers convinced me that a lot of the theories of data management are just not workable in most organisations — which a lot of people outside of data management would be quick to agree with.
Greg Low: One of the things which quite intrigues me is the big disconnect between the universities' modelling of how things would work and reality. For example, you see people who model an invoice, and in it they'll have a product code but they'll never have a description of the product — they'd say it's in the product table. They never think of things in historical terms: down the track the product description changes, but you don't want to look at an old invoice and suddenly see you've now sold a different product to what you did back then. What about that whole business of getting your head around the historical perspective on how models work?
Graeme Simsion: I think there are a number of issues that you've touched on in terms of the disconnect between universities and practice. One of them is a straightforward teaching disconnect: the teaching largely works with very simple examples, and I think we all know that once you actually get out there in the real world, the examples aren't so simple. The difficulty with a lot of teaching is that it doesn't make you aware that the problems you'll meet in the real world will be so complicated that the techniques you've been taught will not be sufficient to cope with them. You can learn coding, for example, and to a certain extent the problems you strike in the real world are an extrapolation: you've learnt the techniques and you've got to apply them to something bigger and tougher, but the basic techniques are still sound. My belief is that, the way data modelling is taught, those techniques will not carry you very far at all in the real world. You have to learn completely new techniques, different ideas. We don't really teach data modelling at university, we don't teach database design; we teach knowledge of the syntax and the conventions, and it's like teaching someone the rules of chess and saying now go out and play. A lot of work still needs to be done.
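The invoice example Greg raises — preserving the historical product description — can be sketched in a few lines. This is an illustrative sketch using Python's built-in sqlite3 module; all table and column names are invented for the example, not taken from any real system discussed in the show.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (
        product_id   INTEGER PRIMARY KEY,
        description  TEXT NOT NULL          -- current description; may change
    );
    CREATE TABLE invoice_line (
        invoice_id   INTEGER NOT NULL,
        product_id   INTEGER NOT NULL REFERENCES product,
        description  TEXT NOT NULL,         -- snapshot taken at invoicing time
        PRIMARY KEY (invoice_id, product_id)
    );
""")
conn.execute("INSERT INTO product VALUES (1, 'Widget Mk I')")
# Invoicing copies the description, so the invoice is a historical record.
conn.execute("""INSERT INTO invoice_line
                SELECT 100, product_id, description FROM product
                WHERE product_id = 1""")
# Later the product is renamed...
conn.execute("UPDATE product SET description = 'Widget Mk II' WHERE product_id = 1")
# ...but the old invoice still shows what was actually sold.
print(conn.execute("SELECT description FROM invoice_line "
                   "WHERE invoice_id = 100").fetchone()[0])  # Widget Mk I
```

The point is simply that "look it up in the product table" silently rewrites history; a deliberately denormalised snapshot column is one common way to avoid that.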
Greg Low: How important is data modelling in your view? As a separate discipline.
Graeme Simsion: Let me answer the broader question first. I think the data model is the single most important part of a data-centric information system specification. If you're writing games it might be different, but if you're talking about most business applications — which tend to be built around a fairly substantial amount of data, with programmers whose job it is to put it in, take it out, manipulate it and so forth — then my view is that data structure is the single most important contributor to the quality of the design of that system. When we work with bad data structures we're constantly coding around them; we have to deal with the ugliness. When you work with a very good data structure, the programme code seems to sit nicely in its place. Where do those data structures come from? I would argue that the logical specification of those data structures — what you can see, as distinct from tablespaces, indexes and so on — is data modelling; the task of coming up with those is data modelling. Is it a separate discipline? It is arguably the most important component. Would you like me to lead into who does it?
Greg Low: Yes indeed. Who is the best person to do this?
Graeme Simsion: My argument is that the skills required to be a good physical database designer and the skills required to be a good data modeller are relatively distinct. There is some overlap in the middle, but the data modeller needs to understand, A, the business and, B, logical data structures. The database technician needs to be a person who understands the database management system first and logical data structures second. The DBA is essentially a software-based person: a SQL Server DBA should be able to move comfortably from one business to another and still transfer their skills fairly quickly. A data modeller should be able to move from one DBMS to another and transfer their skills fairly quickly. The data modeller has more trouble moving from one business to another, where the database technician or DBA has trouble moving from one DBMS to another. That's the nature of business skill versus technical skill. I don't think the jobs are so big that they can't reside in one person; it's possible to be an expert at both. I consider myself an expert data modeller and an expert at other things in my life as well. But neither party should assume they have the other's skills without learning them in a professional sense.
Greg Low: That's the case in all IT areas. For example, people who can build websites also think they're graphic designers. Usually they're not.
Graeme Simsion: Yes, and if you put them on a week-long or one-day course in design they'd go from knowing 10% to 30% and increase their competency. The same comment applies to a DBA who's involved with coming up with logical data structures: if there are no data modellers around, but they know the basic principles, they're going to avoid making some gross bloopers.
Greg Low: What do you think are the most common gross bloopers?
Graeme Simsion: Bad choice of primary key. That's the number one fault across databases: trying to identify something using data that may change over time. I'm not saying every key must be a surrogate key, but bad choice of key causes more problems — problems that cause the thing to be retired earlier — than anything else I've seen. You see other problems too, some from novices, some from experts.
Greg Low: On inappropriate keys — there was a session Joe Celko did in Dallas a few weeks back, and this is an area he seems to differ from many people on: what's an appropriate key and what's not. His view is that he doesn't like a key with no natural meaning; he looks for keys that somehow relate to reality. He'd prefer a key, even across multiple columns, that a human can look at and say is probably right or probably wrong. Thoughts?
Graeme Simsion: There are keys and there are identifiers, and they're not necessarily the same thing. Allocating a surrogate key does not relieve us of the real-world problem of distinguishing between one instance and another; it doesn't relieve us of making sure one instance in the database corresponds to one instance in reality. On the other hand, the integrity of a relational database and the way it works relies on the key: we know there's a problem if the key is not unique, and you get similar problems if the key is not stable. If you have to change the value of a primary key, that change will have to be propagated across all the foreign keys that used it, including stuff that's been archived; otherwise you've got to write a whole lot of code to deal with history. More fundamentally, you lose the idea that when a key changes it means you've got a new instance. For example, with an insurance policy you could make a whole bunch of changes, even to the person's date of birth, but there's still the concept of it being the same policy. At a certain point you need to say that change is not possible: you need to cancel the policy and issue another. The question is, how do you show that difference in the database? In my view, it's that change of key — a way of saying one real-world instance is being replaced with another. I'm not saying a key can't carry meaningful information; just don't make it your primary key if it's not stable.
Greg Low: You see a lot of people who would use identity columns and so on, and the issue with that is you've got all the management issues of anything that's automatically numbered like that when you start moving things around. I must admit, he said that anything like that, where he sees a sequential number, is almost like a throwback to trying to emulate that. I must admit the guys I work with are at the other end of the spectrum: they want an identifier that's stable and unique, and the fact that it doesn't relate to anything doesn't worry them.
Graeme Simsion: No, the issue here is that in the first instance we're not talking about relating to anything in the real world. We're actually talking about uniquely identifying a row in a table — being able to organise things internally in a database so that everything hangs together in a comfortable way which is not complicated to change or update, which doesn't imply multiple updates all over the place. It keeps things neat and tidy. Then you have to say: I need to map this number I've come up with to an instance in the real world, and I need a mechanism for that. That might be that I carry the value of some real-world identifier, and there may well be a timeframe associated with that. These are all things we can do really well within a DBMS. A relational structure relies so heavily on the primary keys and foreign keys that doing those badly leads to ugly coding and premature obsolescence. I'm surprised to hear Joe saying that. Perhaps he's arguing that you do need to carry this extra information — the surrogate key won't do it in its own right.
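The separation Graeme describes — a stable surrogate key internally, with the real-world identifier carried as data, optionally with a timeframe — can be sketched like this. A hypothetical sqlite3 example; the table shapes and policy numbers are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Stable surrogate key: never changes, so foreign keys never need rewriting.
    CREATE TABLE policy (
        policy_id    INTEGER PRIMARY KEY,
        holder_name  TEXT NOT NULL
    );
    -- The real-world identifier is carried as data, with a timeframe,
    -- so the business can renumber without touching any keys.
    CREATE TABLE policy_identifier (
        policy_id    INTEGER NOT NULL REFERENCES policy,
        policy_no    TEXT NOT NULL,
        valid_from   TEXT NOT NULL,
        PRIMARY KEY (policy_id, valid_from)
    );
""")
conn.execute("INSERT INTO policy VALUES (1, 'A. Customer')")
conn.execute("INSERT INTO policy_identifier VALUES (1, 'POL-1999-042', '1999-01-01')")
# The business renumbers its policies; the surrogate key is untouched.
conn.execute("INSERT INTO policy_identifier VALUES (1, 'P-000042', '2004-07-01')")
current = conn.execute("""SELECT policy_no FROM policy_identifier
                          WHERE policy_id = 1
                          ORDER BY valid_from DESC LIMIT 1""").fetchone()[0]
print(current)  # P-000042 — the current real-world number, reached via the stable key
```

Changing the visible policy number here is an ordinary insert, not a cascading key update across every referencing table and archive.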
Greg Low: I might be mis-paraphrasing what Joe was saying — he was very keen on the idea of specific identifiers as primary keys, and he was talking in terms of things that are physical, real-world type stuff. I have seen a lot of guys in that camp, and a lot of the universities I've seen tend to lean that way as well. I think they're thinking that an identifier that isn't related to the real world is more of an implementation detail in the database than something that forms part of the model.
Graeme Simsion: That becomes two questions, so let me take both of them. I think with university examples in particular, they're so contrived, so simplistic, that they're not even addressing that question. If you've got universities teaching it and they've deliberately decided to use a real-world identifier rather than an ID, then fine, but I think most of the time they just pop in a department ID or a person's name without even thinking. It's a simple example.
Greg Low: We had a guy apply for a job at the university who only had one name. I always wanted him to come work there because I wanted to see all our systems melt down.
Graeme Simsion: That's a classic example of data modelling knowledge. The experienced data modeller comes to an organisation that's trying to put together an application that needs to keep names and says: are you aware that sometimes people only have one name? Or: there's an international standard for name recording and it goes like this, etc. To me this is a necessary state of mind that modellers need to have, and not necessarily database technicians. I think you can be an excellent database technician without knowing that, but you do need it if you're developing big, industrial-strength applications. You need someone who's able to put their hand up and say the standard format for names is this, but recognise that in some countries it's not just a question of first name and surname — it's family name and so on, and some people have only one name... That to me is part of data modelling knowledge. Let me pick up the other question, of whether something like a primary key is really an implementation decision and therefore isn't a data modelling decision. I think that raises a whole bunch of questions about where data modelling stops and where database design, or physical database design, starts. In almost every organisation I've worked with there have been disputes around that boundary. When I give advice to people about data modelling in their organisation, one of the issues is that if you're going to divide this task, you need to have a very clear understanding of where the boundary lies. I have some views on where that boundary should be, but I'm much stronger on this: if you don't know where that boundary lies, you're going to have problems. If the data modeller specifies what the primary key is, you're going to have a database administrator saying that's my responsibility to nominate, it's none of the data modeller's business — and they'll override it and fail to take into account some piece of information that had the data modeller make that decision, whatever it might be.
My personal view is that primary keys are in fact a data modelling issue, at least in the first instance. My view is that the data modeller delivers a complete logical database design — everything you can see. It may well be that the database technician turns around and says that key won't work, not application-wise but speed-wise; it might be the throughput, whatever. At that point the negotiation goes on about what can be done, in the way a builder might say to an architect: listen, I can't build that, you're asking me to do something unreasonable — or, have you seen the cost of those bricks? Etc.
Greg Low: One of the things I sort of wonder about is that a lot of the data modelling I see — and the same with database design — tends not to be particularly dynamic. It tends to be almost like what we used to do in programme design with the waterfall approach: you design things and then build them. It's interesting that nearly all of programming seems to be moving more and more towards agile approaches, where often nowadays you're starting to build things well before people have any real idea of what is actually needed, and it's only when they start to see things that they realise there are bits they haven't told you. And I just wonder about the tools we're working with — I look at the database tools and I just don't see the same level of refactoring tools and so on that we now have in the programming world.
Graeme Simsion: I think there's a reason for that. Before we talk about refactoring, let's talk about the agile approach. The agile approach is a descendant of prototyping, and all those approaches run into the same intrinsic problem: it's easier to change code than it is to change database structures. I don't mean that it's hard to reorganise a database — the problem is the amount of collateral damage caused to everybody else. That's the problem: you say, hey, we have a much neater way to do this, we'll change the shape of the database overnight, and there's going to be a mutiny on the part of the programmers because all the coding is going to be rendered unworkable. So the first thing — and this is my experience with prototyping and RAD, and certainly I haven't been involved in agile projects, but I've had some interesting conversations with Scott, for example, who's a bit of a leading light in that field — my picture is the same: you need to have a reasonably stable database design down early in the piece. Otherwise the programmers are going to be working with two moving targets: the business requirements and that underlying foundation. That's usually too many unknowns, too many degrees of freedom in the project. Now that's not easy, but I think it's doable, and it needs good data modelling skills. It needs people who can build structures which are reasonably generic and therefore stable, but at the same time reasonably straightforward to programme against — people who have good pattern knowledge, in the sense that they can see how something might look and can anticipate even though all the requirements aren't in. Then we get to refactoring, and it's the same thing: a change to the database is likely to have such impact elsewhere that it just makes the job harder, a more complicated problem to tackle.
Greg Low: Do you think part of the problem, in shops where there is a constant need to keep changing the model, is also just a lack of skills in the people doing the analysis and design of the model in the first place?
Graeme Simsion: There are two things that you want from a data model. I tend to call them stability and flexibility, and I thought Larry English characterised them well in a presentation a few years ago. Stability is the ability to deal with new queries, new uses of the existing data, without having to change the structure. Flexibility is the ability to accommodate new business requirements — new business data requirements — with minimum pain. There are formal techniques you can use to achieve both of those things. My view, if you want it in one or two sentences, is that you don't want to build rules into a data structure that are likely to change during the life of the application. There are plenty of places you can put rules: you can put them in the data structure, in data values, in code, or you may have them sit outside the application — in users' heads or documented elsewhere. The basic rule is that if you build something into the data structure it's going to be hard to change, because people will assume that rule and build around it. So if you've got a rule that's likely to change during the life of the application, put it somewhere else. And that's why you end up with table-driven applications and so on.
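The table-driven idea Graeme ends on — putting a volatile rule in data values rather than in the structure or the code — can be sketched as follows. A minimal sqlite3 sketch; the discount rule and the customer types are invented purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- The rule lives in data values, not in the table structure or the code,
    -- so changing it is an UPDATE rather than a schema change and a rebuild.
    CREATE TABLE discount_rule (
        customer_type  TEXT PRIMARY KEY,
        discount_pct   REAL NOT NULL
    );
""")
conn.executemany("INSERT INTO discount_rule VALUES (?, ?)",
                 [("RETAIL", 0.0), ("TRADE", 10.0), ("STAFF", 25.0)])

def price_for(customer_type, list_price):
    """Apply whatever discount the rule table currently holds."""
    pct = conn.execute("SELECT discount_pct FROM discount_rule WHERE customer_type = ?",
                       (customer_type,)).fetchone()[0]
    return list_price * (1 - pct / 100)

print(price_for("TRADE", 200.0))   # 180.0
# The business changes the trade discount: a data change only, no code change.
conn.execute("UPDATE discount_rule SET discount_pct = 15.0 WHERE customer_type = 'TRADE'")
print(price_for("TRADE", 200.0))   # 170.0
```

Had the three discounts been hard-coded, or worse, baked into three differently shaped tables, the same business change would have rippled through structure and code.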
Greg Low: Ironically, another thing from Joe's session: one of the things he loves to see is a whole lot of constraints at the column level — essentially sanity checks on columns at the database level. I'm just interested, given what you were saying: is your thought that maybe those shouldn't be there, that they should be in a middle layer, because of the difficulty of changing things?
Graeme Simsion: I think you take a pragmatic look at it. You say, all right, if I put this constraint in at the column level, what will happen when that constraint changes? What is likely to be affected? If it's self-contained at the column level or whatever, wonderful. But what's probably more likely is that it's just a final line of defence, and you're expected to build a bit of code anyway. People should think about where they put their constraints, understand there's more than one place to put them, and make a conscious decision about where to put them based on the ease of changing them and the likelihood that they will change.
Greg Low: One of the problems is if you do it at multiple levels you can easily end up with overlapping constraints or something?
Graeme Simsion: The enforcement of business rules right through the system is a crucial part. Some people say it's the only thing an application is: an application is an instantiation of a bunch of business rules. The question is, where are you going to put those rules? You put them in one place, or, if you're going to have redundancy, that redundancy is deliberate — it isn't something that just arises. Clean design means the rule is held in one place, and if you're putting it in twice to cross-check it, then that cross-checking is a deliberate choice based on what you perceive as potential weaknesses in the application or whatever. Not just "we did it twice because we thought it was a great idea on the day", or "that's the way that programmer works — he always checks this".
Greg Low: The question then is at what level that rule should actually live, and I suppose one of the views people have often had is that if you put it in at the bottom level, the database level, then no matter what the client is, at least you're not going to get around it.
Graeme Simsion: Absolutely — that's very strong enforcement of a business rule. If an insurance policy can only have one customer, and you've only got one customer number on the insurance policy, you're going to need some very ugly coding or very hard work to get around that and give a policy two customers. It's really built into the application in an intrinsic way, which is great. It also makes life simpler: if those rules are there, they're in a central place, and the structures tend to look natural. But if that rule changes, you've got a very ugly situation on your hands. So the rule in general is: if you can build the rule into the data structure in a natural way — because some rules are not amenable to being put in a data structure — and you're confident it won't change, then that's where it goes. In many ways, what data modelling is about is finding those rules which can serve as the foundation of your application. What are the rules we think are stable enough to rely on? Let's build those into the way we shape our data, and let's build them at the foundation.
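Graeme's one-customer-per-policy example is a rule enforced by the shape of the data itself rather than by any code. A hedged sqlite3 sketch of that shape, with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE customer (
        customer_id  INTEGER PRIMARY KEY,
        name         TEXT NOT NULL
    );
    -- A single customer column on the policy: the one-customer-per-policy
    -- rule is enforced by the structure, not by application code.
    CREATE TABLE insurance_policy (
        policy_id    INTEGER PRIMARY KEY,
        customer_id  INTEGER NOT NULL REFERENCES customer
    );
""")
conn.execute("INSERT INTO customer VALUES (1, 'First Customer')")
conn.execute("INSERT INTO customer VALUES (2, 'Second Customer')")
conn.execute("INSERT INTO insurance_policy VALUES (10, 1)")
# There is simply nowhere to record a second customer against policy 10;
# re-inserting the same policy_id violates the primary key instead.
try:
    conn.execute("INSERT INTO insurance_policy VALUES (10, 2)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

If the rule later became "a policy can have several customers", this structure would need a new policy-customer table and every program touching it would feel the change — which is exactly the trade-off Graeme describes.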
Greg Low: So I suppose you've still got the whole discussion as to where the DBA starts and stops and where the modeller starts and stops in that regard, and I suppose also the fact that a modeller would tend to have to be across the enterprise, not application-specific.
Graeme Simsion: Well, not necessarily. I think it's a good thing for modellers to be across the enterprise, because they can see what's being done in context. They've got an idea of the questions of stability and so forth, because typically the application — or at least the database — is going to be expected to outlast the users that you're talking to. The data modeller is going to have to inform themselves as well as they possibly can about what is going to happen to those business rules over the expected life of that database. And I think the crucial thing you need up front is a statement of what the expected lifetime of the application is going to be. It's certainly something the data modeller needs to qualify their work with: you told me it was a seven-year timeframe, I've done my best to assess that, but if you want to run it for the next fifteen, then who knows. It's getting that sort of thing into place. On this basic question of where the data modeller stops, my view is that the data modeller's responsibility is the conceptual schema — which is to say, if you can see it — the base tables, if you like — it's the data modeller's responsibility. I believe what the data modeller delivers is a default set, as it were, of base tables. Then when you start talking about performance, those come up for negotiation, fair enough. But that's what they deliver, and in any change to those for performance reasons the data modeller is a key party, because they're the person who made the decisions to put them in place in the first instance.
Greg Low: That might be a good point to take a break for a few minutes, and we'll be back after the break.
Greg Low: Welcome back from the break. What I might do for a moment, Graeme, is get you to share anything you wish about yourself, where you live, and anything we might get to know about you.
Graeme Simsion: Well, as I said, I'm living in Melbourne, Australia, and I've been associated with Melbourne University for a little while. In fact, I'm in the final throes of finishing a PhD, so before you rang this morning I was thrashing away at that. I'm only two and a half, three months away from submitting, depending on what my supervisor says about the last draft.
Greg Low: Congratulations, that's really good. I must admit, though, I found myself that the real work started once I submitted.
Graeme Simsion: (Laughs) It's certainly felt like real work over the last three or four years. But it's been on the data modelling stuff, so it was a real return to roots. I've been looking at data modelling in practice — most of the research has used students as surrogates for novice data modellers, so we know an awful lot about how students do data modelling and very little about how practitioners do it. That's been really interesting. In fact, I went and spoke to the SQL Users Group in Brisbane; I got them to fill out a little survey to benchmark how they approach the database design task against how data modellers view the data modelling task. I live in Fitzroy; I've got a little bit of a share in a wine business and an antique shop and various bits and bobs around the place. I do the occasional bit of consulting to support myself, and the occasional raid over to the US or UK to do a bit of teaching.
Greg Low: What sort of wine have you got a small share in?
Graeme Simsion: Oh, it's a big share — it's a 50% share. But it's a very small business. We distribute Pinot Noir; we specialise in that. A good friend of mine is a wine buff and that was his dream; he used to be a data modeller and decided he'd had enough of that. So www.pinotnow.com.au is the website. He's been doing very well, thanks partly to not having a physical retail outlet, which keeps the costs manageable. I just get to drink the stuff.
Greg Low: It's kind of intriguing, the number of people I see who are no longer in the industry, and how often they've moved to something completely different — I often have to check that's really where they said they were going, dropping out totally into a different field. I think it is a bit of an issue: people tend to get a bit burnt out over long periods of time in this industry, and sometimes I think they just love the idea of not having to study and learn almost every day of their lives, so they drop out to something that doesn't require that.
Graeme Simsion: Although I think it’s sometimes the people that don’t study and learn who need to drop out.
Greg Low: What are some of the other main bloopers that you see with models that people come up with? Apart from the primary key thing.
Graeme Simsion: I think you see bloopers at different ends of the scale. Novice-type bloopers range from very literal modelling — whatever the user said gets turned into an entity, and it may not be a very well-formed entity — on up. The worst models I see from novices are from people who've never been on the database side, so they're actually drawing things that are a sort of impressionist's view of the world, never really implementable as a database design. They're typically the ones who say, oh no, conceptual modelling only goes so far, then the DBA takes over — and the poor DBA has actually got the job of doing the job. It's a bit like someone saying, I'll sketch a plan for my house, but I won't have any responsibility for things being structurally sound or workable; that's the job of the builder. And the builder takes a deep breath: oh, another person who's designed their own house and didn't even put anything to scale, etc. There's a whole raft of things that come from people like that — a mishmash of business concepts with no integrity as the basis for a database design.
Greg Low: I see the opposite end of that, where I see OO guys who view absolutely everything in some hierarchy, and they look at the base object in the hierarchy and think the way they'll model it is that there'll be one table in the database — and that's it — and what they'll do is add columns to that table for every possible attribute of all the descendant, or child, classes in the hierarchy. That's the opposite end.
Graeme Simsion: I don't want to comment too much on OO design, except to say that most of the time the persistent information is going to end up in a relational database. When people say we don't need data modellers because we're using OO or whatever, I just say: OK, if you're going to put stuff in a relational database, is someone going to cast an eye over the ultimate relational database design? Is it any good? Is it well formed?
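The one-giant-table approach Greg describes is often contrasted with a supertype/subtype split — one common relational rendering of a class hierarchy, though not the only one. A hypothetical sqlite3 sketch; the party/person/organisation names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Instead of one table with a column for every attribute of every subclass,
    -- one common rendering is a supertype table plus one table per subtype.
    CREATE TABLE party (
        party_id    INTEGER PRIMARY KEY,
        party_type  TEXT NOT NULL CHECK (party_type IN ('PERSON', 'ORGANISATION'))
    );
    CREATE TABLE person (
        party_id       INTEGER PRIMARY KEY REFERENCES party,
        date_of_birth  TEXT NOT NULL      -- meaningful only for people
    );
    CREATE TABLE organisation (
        party_id    INTEGER PRIMARY KEY REFERENCES party,
        abn         TEXT NOT NULL          -- meaningful only for organisations
    );
""")
conn.execute("INSERT INTO party VALUES (1, 'PERSON')")
conn.execute("INSERT INTO person VALUES (1, '1970-01-01')")
conn.execute("INSERT INTO party VALUES (2, 'ORGANISATION')")
conn.execute("INSERT INTO organisation VALUES (2, '12 345 678 901')")
# Subtype attributes stay NOT NULL and relevant, rather than becoming a sea
# of mostly-empty nullable columns on a single giant table.
print(conn.execute("SELECT COUNT(*) FROM party").fetchone()[0])  # 2
```

Whether to split or flatten is itself a design trade-off — which is exactly the "more than one right answer" point Graeme makes later in the conversation.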
Graeme Simsion: There's another group of problems I see which come from expert data modellers. Sometimes expert data modellers have become insensitive to the needs of the applications people and even the users. They tend to do one of two things. One is they try to capture every rule in the data model, without entertaining the idea that a rule might live outside it — so the data models become very complex, very hard to work with. The alternative is they build very generic data models that can handle just about everything but have hardly any business rules in them: party, role, party relationship, business agreement, all these sorts of very high-level entities. All they've done is shift the responsibility for the rules to the applications people, who are then going to have to build them into the code. Often those sorts of models are incredibly hard to actually work with in practice.
Greg Low: On DotNetRocks, Mark Miller was talking about how when he looks at object design he has his own little set of rules in his head, telltale signs of things that need to be different classes. If he sees a class where somebody starts adding boolean properties, and often these properties change how the whole thing works, that's the sort of thing that makes him go: hang on, this probably ought to be two things, not something with a split personality. I often see that in table designs, where people end up with a table that serves some particular function and they'll start adding a completely unrelated function, even to a single column. So it'll be this column where, if this is an invoice, it's actually an invoice number, but sometimes it's not, it's one of these other things over here. But it's a single column.
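The overloaded-column problem Greg describes could be sketched as follows (again with invented, hypothetical table and column names): a single `reference` column whose meaning depends on the row's type, versus separate columns with a check constraint that ties each meaning to its type.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Overloaded column: "reference" is an invoice number for some rows and
# something else entirely for others. The meaning depends on doc_type,
# and nothing in the schema says so.
conn.execute("""
    CREATE TABLE transaction_log (
        id        INTEGER PRIMARY KEY,
        doc_type  TEXT NOT NULL,
        reference TEXT NOT NULL   -- invoice no? credit note no? who knows
    )
""")

# One possible repair: give each meaning its own column, and declare
# that exactly the matching one is populated for each document type.
conn.execute("""
    CREATE TABLE transaction_log2 (
        id             INTEGER PRIMARY KEY,
        doc_type       TEXT NOT NULL CHECK (doc_type IN ('invoice', 'credit_note')),
        invoice_no     TEXT,
        credit_note_no TEXT,
        CHECK (
            (doc_type = 'invoice'
                AND invoice_no IS NOT NULL AND credit_note_no IS NULL)
            OR
            (doc_type = 'credit_note'
                AND credit_note_no IS NOT NULL AND invoice_no IS NULL)
        )
    )
""")

# A well-formed invoice row is accepted...
conn.execute(
    "INSERT INTO transaction_log2 (doc_type, invoice_no) VALUES ('invoice', 'INV-1001')"
)

# ...but a credit note carrying an invoice number is impossible to store.
try:
    conn.execute(
        "INSERT INTO transaction_log2 (doc_type, invoice_no) VALUES ('credit_note', 'INV-9')"
    )
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```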
Graeme Simsion: I think the classic example of that, particularly amongst people who have discovered generalisation for the first time, is inappropriate generalisation: things that just happen to have similar behaviour, and suddenly someone has this flash of insight in the shower and says oh, a person and an agreement, they're both things that we can change, let's call them a changeable object. With any generalisation you've got to look at the utility of it. Not just is this beautiful, is this elegant, does it have some level of truth. As human beings we tend to see patterns, to see the similarities rather than the differences. Say ok, these are similar in some ways, but that doesn't mean they necessarily have to be lumped together; now, what are the differences? Here's something I think people could do a lot more of in data modelling, and I haven't even got on my hobby horse today, but my particular hobby horse is that I believe data modelling is design. It's not a one-right-answer discipline, and that shouldn't come as any surprise to someone who comes from a database design background, because they call that design; they accept there's more than one answer. But data modellers don't always agree with that view, and it's a controversy within the data modelling community. If you come up with what you think is a pretty good data model, I think it does you a lot of good to then try to come up with another one that's different. Too often we get anchored and we don't consider alternatives; we don't consider that there might be more than one way of doing it. We settle on one way, so this must be the right answer, and if it doesn't work I've got to throw my hands up in the air, or I've got to fight and defend it. I was involved in a project not that long ago where I came up with a data model for someone.
It was only a fairly small project, and I thought, there's another way of doing it which I don't like very much but it's probably the more obvious way, so I did the obvious model as well, the one a more experienced modeller would have come up with. We looked at both of them, put them in front of the user, talked through all the implications, and in the end we went for the second one, the obvious one. I could see all the reasons; I didn't feel hurt; that was the better answer to go with.
Greg Low: Pretty much brings us up to time. I know we’re in a little bit of a tight timeframe today but what I might do is thank you and say look, where can we see you or what’s coming up or what things have you got happening in your future?
Graeme Simsion: What’s coming up? Most of my engagements are in the States, but I am doing an advanced data modelling class on the fifth of April in Melbourne. Aside from that, if you want to know a little bit more about what I’m about and about data modelling, probably the best thing, and cheaper than an advanced data modelling class, is to grab a copy of Graham Witt’s and my book, Data Modelling Essentials, which has been doing very nicely for us over the last year or so since the 3rd Edition came out. Most people like the book because it’s readable; we go into a lot of the issues we’ve talked about today. If you want a reasonably easy introduction, it’s not too bad at that.
Greg Low: Thank you again Graeme, have a great Christmas and we’ll talk to you again soon.
Graeme Simsion: Thanks very much Greg. Cheers.
Greg Low: Thank you.