Kimberly Tripp is a SQL Server MVP and Microsoft Regional Director and has worked with SQL Server since 1990. Since 1995 Kimberly has worked as a speaker, writer, trainer, and consultant for her own company, SysSolutions, Inc. (SQLskills.com). As a speaker, Kimberly has a gift for discussing complex technical topics with ease and occasional humor.
Greg Low: Introducing show number 15 with guest Kimberly Tripp.
Greg Low: Our guest today is Kimberly Tripp. Kimberly is a SQL Server MVP and Microsoft Regional Director and has worked with SQL Server since 1990. Since 1995 Kimberly has worked as a speaker, writer, trainer, and consultant for her own company, SysSolutions, Inc. (SQLskills.com). As a speaker, Kimberly has a gift for discussing complex technical topics with ease and occasional humor. Welcome, Kim.
Kimberly Tripp: Thanks Greg, how are you?
Greg Low: Very good, thanks. Please start by telling us how you got started with SQL Server at all?
Kimberly Tripp: So long ago…I worked at IBM. At IBM I was in marketing and advertising. Got the job through my math teacher in college and he gave an IBM guy my name and the IBM guy, Dennis, gave me a call and asked me to interview. Really strange how I started with IBM. What was that class? System 370 Assembly and it was my instructor from that class.
Greg Low: Posted a thing on my blog the other day. Came across things I was cleaning out with lots of instructions, things like HCF for “Halt and Catch Fire,” puns and joke instructions. 370 Assembly was certainly widely used.
Kimberly Tripp: Absolutely. A weird situation, how I started at IBM. Oddly, even with my computer background I started out doing marketing and advertising for IBM, and I needed a database to track the orders I was taking from a nationwide advertisement. So there I was in advertising, having to create a database to track my orders, which is how we all get started in the infamous order-tracking database industry, and I loved the database work more than the advertising and marketing. Another project came up with a different database; the first one was in DataEase, then a Fox-based project came up and I loved it. Next was SQL Server on OS/2, and I started working in Chicago doing training, education, and consulting, and became a Microsoft Certified Trainer in '90 or '91. I joined Microsoft not long after. It was 1990 when I started working with SQL Server and by '91 I was working at Microsoft. The rest is history.
Greg Low: Intrigued about Fox in the background. What were your thoughts on Fox?
Kimberly Tripp: I think understanding record-based, xBase-style languages is a great base to build on. They're conceptually easier than set-oriented databases; the infamous "go to top, for X = 1 to end of file, do begin" style is very natural mentally. I enjoyed FoxBASE, and it also lets me bond with people who don't yet understand set-oriented databases, because I didn't come from that world either.
Greg Low: Intrigues me because it’s a passionate community that doesn’t die, even with all the predictions that Fox will disappear. I did a speaking gig at OzFox the local Fox conference organized by Craig Bailey. It was great, a lot of passionate people in that community.
Kimberly Tripp: I think in general there are a lot of conferences and new technologies around X-base. I just haven’t looked at it in so long I would be lost myself. It is amazing, though.
Greg Low: They’re using SQL Server at the back end so it’s changing Fox somewhat as well. The SQL Server community is getting quite strong and there’s a lot of passion but in the Fox area there’s just a fervor that would be great to duplicate.
Kimberly Tripp: True. There are definitely diehards in that area. I always think it will slowly go away but it doesn’t and I don’t think it will for quite some time, surprisingly.
Greg Low: So what were you doing at Microsoft?
Kimberly Tripp: My first two years were with Microsoft University, and I did training for them on the East Coast covering SQL Server, Windows for Workgroups, and LAN Manager. I didn't do OS/2 training for Microsoft; I had done OS/2 training before I worked for Microsoft. At Microsoft I was just doing SQL Server and Windows for Workgroups, I think; not even LAN Manager. So, Microsoft University, and then they got out of direct deliveries so I started training the trainers. As much as I like teaching, I like teaching people who will be working with the product, the people who have all the problems. When we started training trainers it wasn't as interesting to me, and that's when I joined the SQL Server team. I was never a C programmer, but I did do testing and builds. I didn't like testing because I knew only one area of the product well, not the whole product. Then I started writing in UE, User Education, and for the last nine months I wrote the Transact-SQL manual. As much as I loved the folks and the small team, I didn't like doing one thing. Maybe it's my strange personality: I like writing, but not all the time; I like speaking, but not all the time. I love teaching. It gives me depth and I learn every time I teach and speak with customers. When I present something, what the students understand from it makes me see it in a different way and makes me learn different things. I love events and conferences. In the mid-'90s there weren't a lot of technical marketing roles where you could do conferences. I suppose today there are a lot more at Microsoft, although I'm not really a marketing person either. I was young and I thought, if I'm going to fall flat on my face, now is the time to do it. I didn't have huge responsibilities, so I started my own company. Ten and a half years later I still love it and it's great.
Greg Low: Refreshing that you’re willing to have a go. Working at a local university I would talk to very bright students and I’d find they were going off to work at a programmer’s graveyard type place. I asked “why are you going there?” They would talk about security, and well, they’re 20 years old with no ties or commitments.
Kimberly Tripp: Except student debt these days. That’s just massive for these kids today. Ah, did you hear me? I just said “These kids today!”
Greg Low: This is when you say “I’m starting to sound like my mother.”
Kimberly Tripp: Thank you, thank you Greg. I think the interview is over now. Seriously, I think some students should take the plunge and really try to be on their own. But it does take an interesting person to run their own business. I must admit there were lots of ups and downs for me when I started. You don't know where your next income is coming from; you have to pay for a lot on your own, insurance… I think security is a good reason, but I wish some of them would find more interesting internships so they could get more jazzed about different technologies rather than going into the COBOL programmer's graveyard. I can't believe some of them are still doing that!
Greg Low: Onto the topic of the day. We’re going to talk about indexing. It’s an area with widely differing opinions and one that makes an enormous difference to performance on various systems and is something often done wrong. Maybe if we start in the SQL Server 2000 end of things. What are the main problems you come across? What do people get wrong?
Kimberly Tripp: You definitely chose a good topic for me. I can talk for days on indexes. I agree that there are a lot of misunderstandings, and there are a lot of improperly indexed, and even more improperly maintained and managed, systems where indexes are concerned. It's also very important because it has a profound effect on the performance of queries, stored procedures, and everything else. I like to talk about locking and blocking, but never by itself. I find people will see a lot of blocking problems, then they'll see lots of locks, and then they'll think locking is their problem, when most of the time locking is a symptom of an indexing problem. What happens with poorly chosen indexes, or even a complete lack of an index strategy, is that you get negative side effects. Not just query performance: when a query is slow it holds locks for a longer time, and that creates blocking. And I don't just mean queries; I also mean inserts, updates, and deletes. Those are going to cause you more locking and blocking problems, and if they can't be processed optimally because the right indexes don't exist, then those locks are held longer and that exacerbates the problem. So it's a great area to touch on, and there are a variety of ways to look at it.
Greg Low: Good point. People often, when discussing indexing, think it will affect selects and the like. It doesn't dawn on them that if you've got to update a row, you've got to find the row first. There are many scenarios where indexing is just as critical to update performance.
Kimberly Tripp: Yeah. I can take that one step further with inserts. There are two types of tables, ordered and unordered, and the difference is simply the existence of a clustered index: it either exists or it doesn't. Not all clustered indexes act the same way, but what a clustered index does, no matter what, is define the insert location. Just knowing where the row goes helps improve insert performance over a heap structure, which is an unordered table. So even insert performance can be sped up with a clustered index, but there's more to it. If the insert location doesn't have space, if the page you're inserting into is full, then a page split has to occur and that makes the insert more expensive. Long story short, inserts can be less expensive with a clustered index, but you need the right creation, management, and maintenance of that clustered index to make it perform well.
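The contrast Kimberly describes might look like this in T-SQL (table and index names here are illustrative, not from the interview); the fill factor leaves free space on pages so that inserts are less likely to cause splits:

```sql
-- Heap: no clustered index, so the table is unordered and rows
-- have no defined insert location.
CREATE TABLE dbo.OrdersHeap
(
    OrderID   int         NOT NULL,
    Customer  varchar(50) NOT NULL,
    Total     money       NOT NULL
);

-- Ordered table: the clustered index defines where each new row goes.
CREATE TABLE dbo.Orders
(
    OrderID   int         NOT NULL,
    Customer  varchar(50) NOT NULL,
    Total     money       NOT NULL
);

CREATE CLUSTERED INDEX CIX_Orders_OrderID
    ON dbo.Orders (OrderID)
    WITH (FILLFACTOR = 90);  -- leave room on pages to reduce splits
```

The `WITH (...)` option syntax shown is SQL Server 2005 style; SQL Server 2000 uses `WITH FILLFACTOR = 90` without the parentheses.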
Greg Low: I really enjoyed the session you did in Munich at PASS last year. One thing you discussed at length is choice of clustering key. What are your thoughts for the listeners?
Kimberly Tripp: Sure. I have this series of concepts I look at, called the clustered index debate, which all focus on where you should create your clustering key. First: if you're going to create a clustered index, which column or columns should it be created on? The things I look for are not the obvious ones. A lot of people will say a clustered index puts your table in that order, so you're best off looking at your range queries. What's missed is that SQL Server has internal dependencies on the clustering key that people don't know about. Non-clustered indexes actually use the clustering key as the lookup value from the non-clustered index into the base table, and that lookup requires that every row be uniquely identifiable. That means the clustering key must be unique so it can uniquely identify the rows. But SQL Server doesn't require that you have a unique clustering key—you know this—because SQL Server will uniquify the rows. If you do not create a unique clustering key, SQL Server will automatically make the rows unique by adding a 4-byte uniquifier to the duplicate key values.
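A quick sketch of the two cases (the table and columns are hypothetical, assumed to already exist):

```sql
-- Non-unique clustering key: rows sharing a LastName each get a
-- hidden 4-byte uniquifier, widening the clustered rows AND every
-- non-clustered index, since they all carry the clustering key.
CREATE CLUSTERED INDEX CIX_Members_LastName
    ON dbo.Members (LastName);

-- Unique clustering key: SQL Server never needs a uniquifier.
CREATE UNIQUE CLUSTERED INDEX CIX_Members_MemberID
    ON dbo.Members (MemberID);
```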
Greg Low: I love that word, a “uniquifier.” I think they made that up.
Kimberly Tripp: They did. I remember the first time I heard it: one of the architects on the SQL team was giving a lecture in 1996 in LA at the Sphinx workshop, a SQL Server 7.0 conference. It was Goetz Graefe, an architect on the SQL team for years; maybe it was '98. I remember he talked about the uniquifier and I was in my seat, laughing a little, because the word was completely made up. Who made this up? It cracked me up. The uniquifier is an important concept. It's something they have to add that takes time and space. Not only does it waste space in the base rows, it wastes space in your non-clustered index keys. It is a time and space waster, not the best of choices. Of course, it handles the situation where the clustering key is not already unique, so they don't have to make uniqueness a requirement; they're more flexible. It's always a trade-off: flexibility versus performance versus simplicity and so forth.
Greg Low: I think the change to row-level locking was about avoiding hot spots, which was an issue in 6.5 as well.
Kimberly Tripp: Totally. As a consultant I remember talking to customers about how they should create their clustered index prior to 7.0, and you're spot on. It was exactly that. We recommended looking at your range queries to create a clustering key that didn't have hot spots, and what we were really trying to avoid was page-level locking, but we used the excuse that we were improving query performance. It got clouded, and ever since, people still think that's the reason to choose a clustering key. But the internals have changed: we went to true row-level locking, so you don't need to avoid hot spots for locking reasons. The whole architecture has changed.
Greg Low: Now the converse, the idea of having all the inserts occurring on the page that is in memory is actually beneficial.
Kimberly Tripp: It becomes counter-intuitive. Unique was the first criterion. The next one you mentioned is that if you have an identity column you get other benefits. I not only like unique, I like narrow. I also like it to be static, because the clustering key is duplicated in all of your non-clustered indexes, so if it changes there are a lot of changes that have to be managed and maintained across all of those non-clustered indexes. That leads us to a primary key, which is unique and static, and if it's not a natural key but a contrived or surrogate key, it's probably also narrow. That may actually be an identity. Identities are unique, narrow, and static, but they're also ever-increasing, which you would traditionally have wanted to avoid; now that we have true row-level locking, all those inserts going to the same page become a benefit, not a negative. Those pages are already in cache, they stay in cache, and the cache activity is more isolated, so you actually use less cache for inserts. Huge benefits. You can't shut me up on this one.
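Put together, the criteria Kimberly lists (unique, narrow, static, ever-increasing) point at something like this hypothetical table:

```sql
-- An identity column is unique, narrow (4 bytes for int), static,
-- and ever-increasing: new rows always go to the last page, which
-- stays in cache, so inserts are cheap and page splits are rare.
CREATE TABLE dbo.Sales
(
    SaleID    int IDENTITY(1,1) NOT NULL,
    Customer  varchar(50)       NOT NULL,
    Total     money             NOT NULL,
    CONSTRAINT PK_Sales PRIMARY KEY CLUSTERED (SaleID)
);
```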
Greg Low: I’m going to have to say that “GUID” word.
Kimberly Tripp: GUIDs! We all have games we play when we’re presenting and one I play is how long until someone says, “How about a GUID?”
Greg Low: Like if you go to any Visual Studio Team Systems sessions at the moment, it’s “how long until licensing comes up?” We had a code camp last year and I felt sorry for Charles Sterling. He had a presentation with 30 or 40 slides and he got to the second one that mentioned licensing and that was the entire session. Yes, sorry, the GUID question.
Kimberly Tripp: There are heated debates on those issues and how long they should last. It's crazy. There's even a petition that was started. Anyway. There are two things at hand here: the primary key and the clustering key, and those two things don't have to be the same. You can have a primary key that's not clustered and you can have a clustered index on something totally different. That's important to start with. I personally prefer the primary key as your clustering key, but if your primary key is a GUID then that may not be the best choice, depending on how you're generating its values. There are a few ways to generate a GUID, in two places: client side and server side. On the server side there's the NEWID() function in SQL Server 2000. You can also write your own extended stored procedure, like my friend Gert Drapers, who wrote an XP that generates GUIDs. If they're generated client side you can never have a pattern to them, because you're not generating them in one location. If server side, you can use NEWID(), which is the only built-in function in 2000, but that isn't ever-increasing, so it doesn't give you the same benefit an ever-increasing identity column would. Can you generate GUIDs sequentially on the server side? In 2000, if you create your own XP, or use Gert's XP (on SQLDev.net) to generate GUIDs sequentially, then you can use a GUID as your primary and clustering key and you would have an ever-increasing pattern. But doesn't that violate the principal benefit of a GUID, that they aren't predictable? True: if you're using the GUID for security reasons then it's not appropriate. But if you're using GUIDs because you want a globally unique identifier, maybe because you're generating them from two or three different sites, then you can have some form of pattern, a pattern per server, and still keep the global uniqueness.
In SQL Server 2005 they added a new function called NEWSEQUENTIALID(), which generates sequential GUIDs, so now you can just call a built-in function that does that.
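A minimal sketch of the 2005 approach (table and constraint names are illustrative); note that NEWSEQUENTIALID() can only be used as a column default, not called ad hoc:

```sql
-- NEWID() produces random GUIDs: inserts land all over the
-- clustered index and cause splits and fragmentation.
-- NEWSEQUENTIALID() produces ever-increasing GUIDs on a given
-- machine, so inserts behave more like an identity column.
CREATE TABLE dbo.Customers
(
    CustomerID uniqueidentifier NOT NULL
        CONSTRAINT DF_Customers_ID DEFAULT NEWSEQUENTIALID(),
    Name       varchar(100)     NOT NULL,
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID)
);
```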
Greg Low: I recently blogged about that, but there are a few provisos about the new function. It seems to take account of the MAC address of the network card on the system. They're saying GUIDs from different systems should be unique, but if you didn't have a network card they might not be. There would be very few scenarios, though, where you have SQL Server but no network card.
Kimberly Tripp: I don’t think you would have much of a distributed application either, so there’s that argument too if you didn’t have a network card. Interesting scenario. Some people feel like having the MAC address in there is a violation of their “security” but I don’t know if I agree with that. To a certain extent I do, but at the same time I don’t.
Greg Low: The predictability is an interesting issue. The idea that if you insert a few rows you could then guess what the next rows would be if you’re using that sort of function and that may be security related.
Kimberly Tripp: Right. The biggest concern in choosing NEWSEQUENTIALID() is that if you want a more random pattern, it's not a good choice. And if you want to use a GUID for your primary key, then for your clustering key you might want something else. Even arbitrarily adding an identity column just for clustering may be a better choice: you get better insert performance with less fragmentation, because all the inserts go to a single location within the table. They don't cause splitting, so that's a huge benefit.
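That split of roles, GUID as the primary key, identity as the clustering key, could be sketched like this (hypothetical names):

```sql
-- The GUID stays as the (non-clustered) primary key for global
-- uniqueness; the table is clustered on a narrow identity column
-- purely to give inserts one non-fragmenting insert point.
CREATE TABLE dbo.Widgets
(
    WidgetGuid uniqueidentifier  NOT NULL DEFAULT NEWID(),
    WidgetID   int IDENTITY(1,1) NOT NULL,
    Name       varchar(100)      NOT NULL,
    CONSTRAINT PK_Widgets PRIMARY KEY NONCLUSTERED (WidgetGuid)
);

CREATE UNIQUE CLUSTERED INDEX CIX_Widgets_WidgetID
    ON dbo.Widgets (WidgetID);
```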
Greg Low: The key reason for using GUIDs is that you can generate them in another tier before it hits the database. It means that client level code, middle tier code, you can generate the GUIDS, write to the database, and know you’re not going to clash with something already there.
Kimberly Tripp: I understand that mentality. I understand that people are trying to minimize expensive round trips, but there are some fast approaches. You can go to the server, get the ID, insert a placeholder row that doesn't have all of the data, go back to the client and do whatever you need to do, then come back and update it. As long as you use best practices with that strategy, you make one extra round trip but you can prevent a lot of negative GUID side effects. It's all trade-offs. One big problem with placeholder rows is that if you insert a row with a bunch of NULLs, then when you update it you'll get fragmentation due to the increased size of the row. The trick is to pre-allocate your row size on the initial insert, when you grab your ID. To do that, make sure you're using default values that reserve the space; as long as you do, you can significantly reduce the update cost and fragmentation and improve your insert time. The extra round trip does cost, but in the end it may be a bigger benefit.
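One way the placeholder trick might look (the table, padded defaults, and column sizes are illustrative assumptions, not from the interview):

```sql
-- Defaults pre-allocate each row at roughly its final size, so the
-- later UPDATE doesn't grow the row and split the page.
CREATE TABLE dbo.Profiles
(
    ProfileID int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    UserName  varchar(30)  NOT NULL DEFAULT REPLICATE(' ', 30),
    Bio       varchar(200) NOT NULL DEFAULT REPLICATE(' ', 200)
);

-- Round trip 1: grab the ID, inserting full-size placeholder values.
INSERT dbo.Profiles DEFAULT VALUES;

-- Round trip 2: fill in the real data; the row doesn't get wider.
UPDATE dbo.Profiles
SET    UserName = 'kimberly', Bio = 'Speaker, writer, trainer.'
WHERE  ProfileID = SCOPE_IDENTITY();
```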
Greg Low: Welcome back from the break. Tell us about your interest in scuba diving.
Kimberly Tripp: Greg, you’re going to be sorry you got me started; it’s almost as bad as SQL. I dove the Barrier Reef in Australia. It’s stunning, giant clams, lots of sea life. The Pacific is amazing. I’ve done a lot of dives and it’s the way I like to relax. There’s nothing that beats a dive vacation: eating, sleeping, and diving for days. Get up, breakfast, dive, relax, dive, lunch, dive, dinner, dive. It’s a little hard core but that’s my personality.
Greg Low: I guess you’re up into hundreds of dives.
Kimberly Tripp: I got certified in ’97, dove a lot until 2002, then working so much I didn’t dive again until 2005 and I’ve taken one more since then. I just logged my 258th dive. Hitting 250 was fun, but there are people who have thousands of dives. It’s a passion but it’s not my job.
Greg Low: I mentioned our local DPE Charles Sterling, do you know Chuck?
Kimberly Tripp: Yeah, I do.
Greg Low: I would have thought so. He’s so passionate about diving. Whenever he’s doing tours I always see if it’s near the coast…
Kimberly Tripp: Yeah, see if he can fit in a day. I’ve only been to Australia once but I made sure to extend the trip, went to the Northern coast and did four days on a live-aboard boat.
Greg Low: One of the best of the lot is up around New Guinea but the security situation and everything else…it’s not nice. In and around the area, the amazing tropical waters, WWII wrecks, and the dive sites are spectacular they tell me. The surrounding problems can be a bit too much.
Kimberly Tripp: Bikini Atoll, where they did the nuclear testing: there were fewer than a few hundred inhabitants, and all of them were moved off the islands in the '40s or '50s to test bombs. They blew up WWII tankers, freighters, and aircraft carriers, and sunk them. It was totally uninhabited for fifty years. In the last few years they've been allowing divers, but it's very deep: tech diving, Trimix diving, usually at 180 ft, decompression dives. I haven't gotten into the tech side.
Greg Low: I spent time at University looking at research theses and the best title I ever heard was “Why Penguins Don’t Explode.” It was fascinating. They used to think that penguins only dove 20 or 30 meters but they actually go down about 200. Totally fascinating how any creature can dive down 200 meters and not implode and conversely how it can come screaming back up to the top and not explode. From memory, I think they made him tone down the title.
Kimberly Tripp: Amazing thing in terms of whales, and I didn’t realize that penguins go so deep. I never dove in the far north or far south where there would be penguins.
Greg Low: Apparently they’re the only creatures that from swimming can throw themselves out of the water and land on their feet.
Kimberly Tripp: Really? Well you learn something new every day! That’s awesome!
Greg Low: He had a friend researching that as well, the biomechanics of that.
Kimberly Tripp: So are these guys employed now?
Greg Low: Don’t know, that was many years ago. I just loved that they were down in the Antarctic doing that. It’s the kind of research that’s not often supported.
Kimberly Tripp: If I could find a career in marine biology and had the motivation to go back to school that would be an interesting area. I admire people who study that. That’s great.
Greg Low: Back to SQL Server, how did SQL Server 2005 change the situation?
Kimberly Tripp: In terms of indexes, the strategies for your clustered indexes and even your non-clustered indexes are for the most part the same. The good indexing strategies for 2000 carry forward into 2005. They've added online index operations, Enterprise only, which is a fantastic way of getting rid of fragmentation by rebuilding an index, and rebuilding is the best way to get rid of fragmentation. There are two ways: you can defrag or you can rebuild. When you rebuild, it picks the data up and moves it, giving you better clustering of that data so the pages and extents are closer together. Rebuilding is better than a defrag. But in 2000, rebuilding is only an offline operation: your table has to be offline for a clustered index rebuild. In 2005 Enterprise Edition you can do it as an online index operation. That's a huge feature.
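In SQL Server 2005 syntax the two options look like this (the index and table names are placeholders):

```sql
-- REBUILD recreates the index from scratch: the best
-- defragmentation. ONLINE = ON (Enterprise Edition only) keeps
-- the table available to queries during the rebuild.
ALTER INDEX CIX_Orders_OrderID ON dbo.Orders
    REBUILD WITH (ONLINE = ON);

-- REORGANIZE (the 2005 successor to DBCC INDEXDEFRAG) is always
-- online in any edition, but only shuffles leaf pages in place,
-- so it's the weaker of the two options.
ALTER INDEX CIX_Orders_OrderID ON dbo.Orders
    REORGANIZE;
```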
Greg Low: It's adding value to the Enterprise license in this release.
Kimberly Tripp: Absolutely. For me and my customers Enterprise Edition of SQL 2005 is definitely a lot more desirable than any other release. In 2000 there were reasons to go for Enterprise, but in 2005 there are a bunch of huge reasons. Just to name a few, clustering--
Greg Low: --number one reason. When I look at the sites that have implemented Enterprise locally it’s invariably because they want a clustered server.
Kimberly Tripp: You can cluster on Standard Edition, but only two nodes, and mostly I see two-node clusters; that varies by site. If they want more than two nodes, they can only use Enterprise Edition. Database mirroring is another: you can do it with Standard Edition, but it's more feature-rich in Enterprise. Then there are features that are Enterprise only: online index operations, partial database availability, database snapshots, online piecemeal restore, peer-to-peer replication. Amazingly, there are features that aren't available at all in any other edition.
Greg Low: Table partitioning?
Kimberly Tripp: Table and index partitioning is Enterprise only. That’s a big one for me too, I’ve written some white papers on that but I didn’t even name that one.
Greg Low: I re-read your white paper just yesterday. I was at site where we’re doing that today.
Kimberly Tripp: Excellent. Hopefully it’s going well. I’m going to put some updated touches on that paper and some lessons learned. When I wrote that white paper everything was in Beta and definitely years since I wrote that. I’m amazed at how much I want to add to that. It’s a great technology, great fun.
Greg Low: Are online operations the most popular in terms of index improvements for 2005?
Kimberly Tripp: Great question. It is really important for companies trying to stay "always on"; Microsoft's new focus is the "always on" technologies. If you're in that arena, where you're trying to keep your database always available, then that's an important one. You have to rebuild, and you have to make sure your environment is not fragmented, because fragmentation negatively impacts performance, which impacts locking, blocking, and everything else. It's a domino effect. So yes, that's important. The more widespread new indexing feature, available in any edition, is the INCLUDE option. In every release of SQL Server there has been a limitation on what we call the B+ tree structure, separate from the leaf level of the index. The two key components of an index are the tree, for navigation, and the leaf level, which holds the data of the index, the index rows. Because they want the navigational component to be scalable, the B+ tree has to be limited to a maximum key size. In every release of SQL Server, that B+ tree limitation has also extended down into the leaf level. The limitation is 16 columns or 900 bytes, whichever comes first, and your index key could not be wider than that.
Greg Low: I don’t think I’ve ever done anything with more than 16 columns in an index.
Kimberly Tripp: I have. In cases with read only systems or systems that are doing a lot of analysis like a relational database warehouse, you want to have many indexes to cover different types of aggregations, different orderings, and different types of analysis structures.
Greg Low: It’s not naturally occurring; you’re creating a covering index for performance.
Kimberly Tripp: Absolutely. Indexes for performance purposes specifically when they cover a query…we should define that. An index which covers a query includes all of the columns that are somewhere in the query, somewhere in the index. Order is not relevant. The columns in the index can be in any order and there can be other columns in the index just so long as every column that is anywhere in the query is somewhere in the index. Then that index covers that query. That’s the definition.
Greg Low: SQL Server can answer the query by just reading the index.
Kimberly Tripp: Absolutely. You end up with exactly that. All of the information the query needs is in the index, and the index is naturally smaller than the table, so even if you have to scan it, it's less expensive than a table scan. There are better ways to cover so that you end up with a seekable index: if the query has a WHERE clause, a GROUP BY, or a join, those columns are the likely candidates for the high-order element, the first column, of the index. Then the index might be seekable, and when it's seekable you get orders-of-magnitude faster performance over reading and scanning the base table. In a range query, if you can seek and then do a partial scan, again you're scanning a much smaller amount of data than scanning the base table. This almost requires a diagram and a demo to make sense of it. Covering is huge; it's one of the most important performance-tuning tactics you can use to tune a query. There is no faster way to get to data, especially in a low-selectivity range query, than to cover the query. But covering is expensive: it takes space, and there were limitations. So INCLUDE helps cover more queries, because you can have a wider leaf level without violating the B+ tree limitation. The most interesting thing about INCLUDE is when it comes to GROUP BY. For example, I want to select the sum of sales by customer over my huge sales table. I've got millions of sales and I want the sum of sales by customer. A simple aggregate: grouping by customer and summing a column with the total amount per sale. This might be the header table or the line items, it doesn't matter. Sum of sales by customer. For SQL Server to best optimize that query, if it had all the rows ordered by customer, with the sales amount in the index, then it could scan and do a stream aggregate, where all of each customer's rows are grouped together.
It can sum it up for customer 1 and stream it out, go to customer 2, sum it up, stream it out… it's a very fast way of aggregating. The appropriate index would be on customer, then total. We'd call that a covering index, and you get a stream aggregate because the high-order element of the index is your GROUP BY column: you take your GROUP BY column, then you add the columns being aggregated, and there's your covering index. Performance is usually significantly faster than any of the other choices. But when you insert another sale for a customer, since the index is ordered by customer and then total, the totals are kept in order within each customer. When they insert another sale for $12.00, they have to insert that row at the $12 spot; if that customer has another sale for $25.00, it has to go to the $25 spot. What's interesting is that this index is actually slower, and causes more fragmentation, for inserts, updates, and deletes. In SQL 2005, and this is really cool, you can create the index with customer, your GROUP BY column, as the key, and say INCLUDE the columns being aggregated. Then the data for the aggregated columns is not ordered, so as inserts and updates come in there's less fragmentation and less maintenance needed. People say INCLUDE is solely for making wider indexes to cover more queries, but one of the cooler things is that it gives you better performance and less fragmentation for indexes where order is not required for certain columns, like the ones being aggregated under a GROUP BY. I recommend it for that purpose, not just for wider covering indexes.
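The pattern Kimberly describes could be sketched like this, assuming a dbo.Sales table with Customer and Total columns:

```sql
-- Key on the GROUP BY column only; carry the aggregated column in
-- the leaf level with INCLUDE, where it doesn't need to be ordered,
-- so inserts don't have to land at the "right" total within a
-- customer and cause splits.
CREATE INDEX IX_Sales_Customer_inclTotal
    ON dbo.Sales (Customer)
    INCLUDE (Total);

-- This query is now covered and can use a stream aggregate:
SELECT Customer, SUM(Total) AS TotalSales
FROM   dbo.Sales
GROUP BY Customer;
```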
Greg Low: Outstanding. Can I ask you about index views; do you make much use of those?
Kimberly Tripp: Indexed views are the precursor to INCLUDE, because indexed views give you a way to cover more queries. Almost the same thing can be done with an indexed view as with INCLUDE, since you can create a view whose clustered index is up to 16 columns or 900 bytes but whose view definition is wider. The same idea, except indexed views have more interesting options. And I shouldn't say they're Enterprise only: you can use them on any edition, but on the non-Enterprise editions they're less flexible; you have to reference the view directly and use an index hint. Indexed views are the next step if you want to improve an aggregate. A stream aggregate is great, but if I can create an indexed view I can probably get literally 100 times better performance, because the indexed view pre-aggregates the data: you can have the customer and the sum of sales in the index, so it holds the pre-aggregated data. When you query, instead of having to do the aggregate, SQL Server just scans the already-aggregated data. Indexed views are great, and there are some great white papers on them on TechNet and MSDN. That's one of my favorite features. I guess I have a lot of favorites.
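A minimal sketch of a pre-aggregating indexed view, again assuming a dbo.Sales(Customer, Total) table with a non-nullable Total (grouped indexed views require SCHEMABINDING, two-part names, and a COUNT_BIG(*) column):

```sql
CREATE VIEW dbo.SalesByCustomer
WITH SCHEMABINDING
AS
SELECT Customer,
       SUM(Total)   AS TotalSales,
       COUNT_BIG(*) AS RowCnt     -- required in grouped indexed views
FROM   dbo.Sales
GROUP BY Customer;
GO

-- The unique clustered index is what materializes the aggregate.
CREATE UNIQUE CLUSTERED INDEX CIX_SalesByCustomer
    ON dbo.SalesByCustomer (Customer);

-- On non-Enterprise editions, reference the view directly with the
-- NOEXPAND hint so the view's index is used:
SELECT Customer, TotalSales
FROM   dbo.SalesByCustomer WITH (NOEXPAND);
```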
Greg Low: They always say the quickest way to work something out is to already have the answer. That brings us to time, so what do you have coming up in your world?
Kimberly Tripp: Petting my dog. I’m kidding. It’s early for you and late in the day for me. My mentality this late is a little more punchy than yours. Some exciting things coming up are both Microsoft events as well as my own SQL Skills events. For Microsoft we’ve got TechEd coming up and that’s a huge event. I’m doing a pre-conf there.
Greg Low: Yes, I’ll see you at TechEd.
Kimberly Tripp: Right on. So that’s in Boston where everything is wicked. Wicked cool, wicked this, wicked that. I lived in Boston for Microsoft years ago so that will be fun. There’s TechEd coming up and I’ve got a pre-conf with my friend Brian Randall. We’re going to show how to build Enterprise applications and he’s going to be the torturous developer and I’m going to be the torturous DBA.
Greg Low: Yeah I know Brian and he’s a fun guy.
Kimberly Tripp: He is awesome. We like to torment each other from the developer/DBA relationship and just bring together the best practices that often aren’t brought together until it’s too late, when it requires rearchitecting an application to really make it more operationally manageable as well as give more performance. That’s a fun one and we’ve only done it once before, at PDC last year, and it was really well received, so we updated it for SQL 2005 SP1 and we added and removed some things. There are new products coming out that we’ll be able to talk about by then, so that will be exciting. After that, probably in August, September, and October, my company, SQL Skills, will be offering immersion events, which are four and five day events where we do nothing but drill into one topic, like performance tuning. Even with four and five day events I think we could do ten days and still not have enough time. When I do an immersion event they are longer days, fairly intense, lots of materials, deep immersion in a topic. I like to do performance tuning or high availability. My colleague, Bob, will be doing developer immersion events and we’re going to schedule those to start in Seattle, Chicago, and New York, covering the bases in the U.S.
Greg Low: Some people might not realize that Bob Beauchemin works for you, or with you. Bob’s a great guy too. Actually I was one of the tech editors on his new book with Dan Sullivan and it’s just an outstanding book. Tell people when they’re looking for developer topic books for SQL Server 2005 they need to be preordering that one.
Kimberly Tripp: It just got released early so you can order it now. It’s really exciting.
Greg Low: I need to go and get it. Going through the book, the level of insight is just truly wonderful.
Kimberly Tripp: Between Bob and Dan they did a great job. Bob is one of these guys I just love in the sense that I’ll say something and I won’t really mean for him to do anything, or be looking for something. I’ll say “Oh there’s this feature, blah, blah, blah…” and I’ll forget about it. A day or two later Bob will send me mail: “I completely figured this out and I wrote this code and it’s optimized here, and you need to do this…” I just completely laugh. Well, Bob, I need to subtly hint more often if you’re going to go off on a tangent and figure everything out. So he’s great.
Greg Low: I’m really pleased to see you working together. That’s outstanding.
Kimberly Tripp: We’re going to have a couple of events where Bob does one and I do one, but we’ll cross over on a few topics where I make more sense in his immersion event and he makes more sense in mine. Probably have three of those this year. We’ll be announcing those to our website subscribers first and offering discounts. It’s a free subscription, and I joke that I don’t even have time to spam. Can’t even remember the last time I emailed my subscribers, but they will get first dibs.
Greg Low: There will be a deal. So SQLSkills.com?
Kimberly Tripp: Yup.
Greg Low: Yup. Well, thanks very much Kim and at the latest I’ll see you at TechEd.
Kimberly Tripp: It’s been great, really fun. I always love doing these things but they always scare me. I always hate being recorded. I feel like it’s going to come back to haunt me.
Greg Low: All good, all good.
Kimberly Tripp: Thanks Greg. Cheers.