Paul Randal
SQL Down Under Show 24 - Guest: Paul Randal - Published: 19 Oct 2007
Former SQL Server product group team member Paul Randal discusses DBCC CHECKDB in SQL Server 2005 along with SQL Server 2008 improvements in database mirroring and partitioned tables.
Details About Our Guest
Paul Randal started in the industry in 1994, working for DEC on the VMS file system and its check and repair tools. In 1999 he moved to Microsoft to work on SQL Server, specifically on DBCC. For SQL Server 2000 he concentrated on index fragmentation, writing DBCC INDEXDEFRAG and DBCC SHOWCONTIG, plus various algorithms in DBCC CHECKDB. During SQL Server 2005 development, Paul was the lead developer and then manager of one of the core development teams in the storage engine, responsible for data access and storage: DBCC, allocation, indexes, heaps, pages, records, text/LOB storage, snapshot isolation, and so on. He also spent several years rewriting DBCC CHECKDB and repair. For SQL Server 2008, Paul managed the program management team for the core storage engine, becoming more focused on customer and partner engagement and feature-set definition. In 2007, after eight and a half years on the SQL Server team, Paul left Microsoft to join his wife Kimberly Tripp, another former guest of the show, in her consulting and training business, pursuing his passion for presenting and consulting. Paul regularly presents at conferences and user groups around the world on high availability, disaster recovery, and storage engine internals, and he writes a popular blog.
Show Notes And Links
Show Transcript
Greg Low: Introducing show number 24 with guest Paul Randal.
Our guest today is Paul Randal. Paul started in the industry in 1994, working for DEC on the VMS file system and its check and repair tools. In 1999 he moved to Microsoft to work on SQL Server, specifically on DBCC. For SQL Server 2000 he concentrated on index fragmentation, writing DBCC INDEXDEFRAG and DBCC SHOWCONTIG, plus various algorithms in DBCC CHECKDB. During SQL Server 2005 development, Paul was the lead developer and then manager of one of the core development teams in the storage engine, responsible for data access and storage: DBCC, allocation, indexes, heaps, pages, records, text/LOB storage, snapshot isolation, and so on. He also spent several years rewriting DBCC CHECKDB and repair. For SQL Server 2008, Paul managed the program management team for the core storage engine, becoming more focused on customer and partner engagement and feature-set definition. In 2007, after eight and a half years on the SQL Server team, Paul left Microsoft to join his wife Kimberly Tripp, another former guest of the show, in her consulting and training business, pursuing his passion for presenting and consulting. Paul regularly presents at conferences and user groups around the world on high availability, disaster recovery, and storage engine internals, and he writes a popular blog. So welcome, Paul.
Paul Randal: Hey, thanks. Glad to be here, finally.
Greg Low: Yes, finally. For listeners who aren't aware, we've been trying to line up a time for this for quite a while, and it's finally happened. What I normally get people to do first is give us a bit of background on how they came to be involved with SQL Server in the first place.
Paul Randal: As the first part of my bio said, I worked at DEC on the file system. It's quite a long story, but to cut it short, DEC was bought by Compaq, and at that point the file system group was in Scotland while the main VMS engineering group was in New Hampshire in the U.S., so we were an expensive group to have around, and Compaq decided to lay us off. It's not often a group of 30 operating system engineers gets laid off at once. Microsoft got wind of it and sent a delegation over to see if anyone wanted to work at Microsoft in Redmond. One of the guys who interviewed me ended up being my first boss at Microsoft, Jeff East, now an architect. He was very interested in me coming over and doing stuff on DBCC because of my background with the equivalent check and repair tools for the file system. Long story short, after a few interviews I came out to Redmond, everything looked nice, so I joined Microsoft, started with SQL Server, and stayed with SQL Server from then on.
Greg Low: Outstanding. We've been through the bio, but give us a little more detail about what you worked on while you were on the team.
Paul Randal: The very first thing I worked on was bulk load, which was an interesting experience, and it seems I didn't like the bulk load stuff, because I was soon sent to work on DBCC. At the time we thought SQL Server 2000 would be SQL Server 1999; we have a great history of slipping SQL Server release dates. SQL Server 2005 was going to be 2003, then 2004, and ended up being 2005. The first thing I wrote was DBCC INDEXDEFRAG, which was an online alternative to running DBCC DBREINDEX, which at the time was the only way you could get rid of fragmentation without creating a new table and moving the data into it. That was interesting code to write; I got to learn more than I ever wanted to about the allocation system, locking, and logging, and what I had to work around to make index defragmentation run online. As part of that I revamped SHOWCONTIG. In 7.0 there were two ways you could run it: one was DBCC SHOWCONTIG itself, the other was a function, fn_indexinfo, which produced the same information as a table. So in 2000 I rewrote SHOWCONTIG to include the table-result functionality, and to add the WITH FAST option, because SHOWCONTIG had always taken a shared table lock while it ran through the table to figure out fragmentation. I wrote an algorithm that takes a much less intrusive lock, so you can figure out fragmentation without doing any blocking of most queries.
Greg Low: That's been a big change in the product, I must admit; in recent versions there's been a move to online operations. They seem to have finally understood that you can't take everything offline and block whole periods of work.
Paul Randal: Sure, that's been a big emphasis in the past few releases: Microsoft has had to get SQL Server enterprise-ready and capable. You can't have a database that has to go offline whenever you need to do maintenance. The other big thing we did in DBCC for 2000 was make CHECKDB run online. Most of those algorithms were written by Steve Lindell, one of the storage leads in the engine, though I had a hand in them. For some of the algorithms I spent days figuring out the order log records could occur in, because the way CHECKDB worked online in 2000 was to look at the log records generated while CHECKDB was running and effectively run recovery on them inside CHECKDB. It had to account for everything that happened while it was running, looking at weird patterns of log records and figuring out what had actually happened, so that CHECKDB could get a transactionally consistent view while looking at the data.
Greg Low: Now, I noticed that in 2000 you always had the possibility of getting false positives…
Paul Randal: Perfect segue, yep. Some of those algorithms aren't perfect, because in certain pathological cases not everything we needed to figure out what was going on gets logged. In those cases I had to make an educated guess, and within them there are sub-pathological cases where the guess was wrong, so you'd get false failures. You'd never get a case where we missed a corruption, but you could get a case where we reported corruption when there wasn't any. My wife Kimberly chips in, "mostly because you're not smart." I shall not respond to the peanut gallery.
Greg Low: So subtle…
Paul Randal: Yes, that's Kimberly; she's a very subtle person. Getting back to the topic, and with no further comments to the peanut gallery: in 2005 I had the pleasure of ripping out the code that did the log analysis to make CHECKDB run online, because we changed from log analysis to database snapshots. With the false-failure issue, it wouldn't have been possible to get rid of the false failures without adding extra log records and slowing SQL Server down, and with all the new features in 2005, updating that code to cope with them all would have been too much engineering effort. So we ripped it out, and the way CHECKDB now works is to take an internal database snapshot, which provides a point-in-time, transactionally consistent view of the database, and run the checks on that.
Greg Low: At least that allows you to take advantage of work somebody else had to do anyway to get snapshots working.
Paul Randal: Right. So that's one of the major differences between 2000 and 2005: there's no possibility of false failures. If CHECKDB says there's a corruption, there really is one. One thing that is possible is that there might be a corruption, you check again, and it's gone. That's because between the two runs of CHECKDB somebody dropped a table or rebuilt an index, and the corrupt pages are no longer allocated to anything. For people who think they've seen a false failure on 2005, that's why: someone changed something between the two runs of CHECKDB, rather than CHECKDB having a bug in it.
Greg Low: Could you give us an indication of what CHECKDB goes through when you execute it?
Paul Randal: Sure. There are many stages; I'll talk about what 2005 does. First it has to check the critical system tables. In 2000 there were three: sysindexes, sysobjects, and syscolumns. In 2005 those don't exist; there are compatibility views that show what they used to provide, but the real ones are the hidden system tables that give the storage engine and relational engine their metadata about tables, indexes, and columns. There are five of them: sys.sysallocunits, sys.syshobts, sys.syshobtcolumns, sys.sysrowsets, and sys.sysrowsetcolumns. Each has a clustered index, and we do primitive system table checks, meaning we read every page at the leaf level of those indexes and make sure we can read them and that the page linkage is correct, because those tables are the building blocks CHECKDB uses for all the other checks. How can you trust anything else CHECKDB tells you if its own metadata is broken?
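[The hidden system tables Paul lists can't be queried directly from a normal connection, but the documented 2005 catalog views expose the same metadata they hold. A hedged sketch; the join condition shown covers in-row data, and the view and column names are the documented ones:

```sql
-- The 2005 catalog views over the hidden metadata tables
-- (hobts, rowsets, allocation units) Paul describes:
SELECT o.name AS table_name,
       p.index_id,
       au.type_desc,        -- IN_ROW_DATA, LOB_DATA, ROW_OVERFLOW_DATA
       au.total_pages
FROM sys.objects o
JOIN sys.partitions p        ON p.object_id = o.object_id
JOIN sys.allocation_units au ON au.container_id = p.hobt_id
WHERE o.is_ms_shipped = 0;   -- user tables only
-- (For LOB_DATA allocation units, container_id joins to
--  p.partition_id instead of p.hobt_id.)
```
]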
Greg Low: Yeah, if there's a problem at that level I can imagine things aren't looking good…
Paul Randal: Yeah. The error people saw in 2000 was "could not read and latch page" on one of the critical system tables, right at the start of CHECKDB. That means a critical system table has a problem in it, and CHECKDB can do nothing about that: the way CHECKDB repairs pages like that is to delete them, so the only options are to restore a backup or extract everything you can into a new database. The next thing it does is the equivalent of DBCC CHECKALLOC: it checks the consistency of the allocation bitmaps, the GAM pages, SGAM pages, and PFS pages, making sure, for example, that an extent isn't marked as allocated to two things at the same time. These checks are very fast, because there aren't many allocation bitmap pages in a database…
Greg Low: I should get you to define GAM and SGAM for people…
Paul Randal: A GAM page is a global allocation map. It's a bitmap page that occurs roughly every 4 GB in every file of the database and has one bit per extent (an extent being eight pages), saying whether that extent is allocated or not. An IAM page, an index allocation map, is the same idea as a GAM page, but there's one per 4-GB interval, per file, per index, or per allocation unit in SQL Server 2005, and it tracks whether each extent in that interval is allocated to that particular index. One of the allocation checks takes all the IAM pages that map a particular 4-GB interval of a file and checks that at most one of them has the bit set for any given extent, so an extent is only allocated to one index. PFS pages occur every 64 MB or so, and they hold a byte per page that says whether the page is allocated, whether it's an IAM page, how much free space it has, and whether it has ghost records, records that have been deleted but not yet removed from the page. SGAM stands for shared GAM, and it tracks extents being used by multiple indexes. Now, I just said extents can only be allocated to one index, but there's a special kind of extent, a mixed extent: the first eight pages allocated to a table are single-page allocations that don't come from a dedicated extent, so a mixed extent can have pages allocated to multiple indexes. When you want to allocate one of those single pages, you don't want to do an expensive search to find an extent with a free page, so the SGAM pages track the mixed extents that still have a free page available to allocate as one of those first eight pages. So that's what the different bitmaps are.
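[The bitmaps Paul defines sit at fixed positions near the front of every data file, and the undocumented, unsupported DBCC PAGE command can dump them. A hedged sketch, assuming a database named SalesDB:

```sql
-- Undocumented/unsupported, but handy for exploring the allocation
-- bitmaps. In every data file: page 1 = PFS, page 2 = first GAM,
-- page 3 = first SGAM.
DBCC TRACEON (3604);              -- route DBCC PAGE output to the client
DBCC PAGE ('SalesDB', 1, 2, 3);   -- file 1, page 2: the first GAM, full dump
DBCC PAGE ('SalesDB', 1, 3, 3);   -- the first SGAM
DBCC PAGE ('SalesDB', 1, 1, 3);   -- the first PFS page
```
]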
Greg Low: While we're on that, one thing I've wondered about in the product is the whole concept of mixed extents. Extents are fairly tiny compared to databases now; do you think there's still value in having mixed extents?
Paul Randal: No, I don't think so at all, and one of the things we discussed for 2005 was whether we should get rid of mixed extents, but even with five years we never got around to it.
Greg Low: Fair enough. It's just that the whole concept saves such a piddling amount of space…
Paul Randal: It complicates the algorithms, it complicates defrag, and at some point they should get rid of it. You could also argue that the allocation bitmaps, now that we're into multi-terabyte databases, aren't as efficient as a b-tree-based structure for tracking allocations. There have been various thoughts about changing the allocation system, rewriting it in future releases, but I don't know what will happen there.
Greg Low: Yeah, it's a lot of work…
Paul Randal: It's an awful lot of work, and we strive to make the upgrade step as small as possible so that downtime is minimized, especially if you have no redundant system you can fail over to. An upgrade like that, rewriting the allocation system, would not be a trivial upgrade step to run.
Greg Low: And you’d want good recovery options on that.
Paul Randal: Yes indeed. That's definitely been shied away from.
Greg Low: So, I interrupted you for the definitions, but I think that was worthwhile…
Paul Randal: Back to where we were. So you run the allocation checks, and if everything's okay we've built up a layer of trust in the building blocks, and CHECKDB goes on to run the equivalent of DBCC CHECKTABLE for every table. That runs checks on every page in the database: page audits, checking computed column values, checking the linkage between b-tree pages, and checking the linkage between tables and their nonclustered indexes, which uses an interesting algorithm. Every row in the table must have exactly one matching row in each nonclustered index, and every row in a nonclustered index must have exactly one matching row in the base table. How do you check that? You could look up every row as you come across it, but then CHECKDB would be doing random IOs, so instead we use a bitmap, and the algorithm works like this. When you come across a base table row, you know which nonclustered indexes exist for the table, so you can construct the nonclustered index records that should exist. We hash each one to a value, map that value to a bit in the bitmap, and flip the bit. The idea is that when we later read the actual nonclustered index records, we hash each of them to a value, and hopefully it's the same value we generated from the base table row, so the bit flips off again. Every record should cause its bit to be flipped exactly twice, once for each nonclustered index that exists. At the end of the checks, the bitmap should have no bits set. If there are bits set, then because it's a lossy algorithm you can't tell which records lack their matching records, so you have to go back and rescan the pages, which we call a deep dive. Prior to 2005 SP2, that's when you might have seen CHECKDB's run time suddenly increase.
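[A loose analogue of this cross-check can be expressed in plain T-SQL: hash the key columns once via a base-table scan and once via an index scan, aggregate the hashes, and compare. This is only an illustration of the idea, with hypothetical table and index names:

```sql
-- Assumes a table dbo.Orders with a nonclustered index
-- IX_Orders_Customer on (CustomerId, OrderDate).
SELECT CHECKSUM_AGG(CHECKSUM(CustomerId, OrderDate)) AS base_hash
FROM dbo.Orders WITH (INDEX(1));                   -- scan the base table

SELECT CHECKSUM_AGG(CHECKSUM(CustomerId, OrderDate)) AS index_hash
FROM dbo.Orders WITH (INDEX(IX_Orders_Customer));  -- scan the index
-- If the two aggregates differ, some row lacks its matching index
-- record. Like CHECKDB's bitmap, the aggregate is lossy: it can't say
-- WHICH row, so a "deep dive" rescan would be needed to pin it down.
```
]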
Greg Low: This is the one that now writes an entry in the error log.
Paul Randal: That's right. There used to be no way to tell a deep dive was happening and that CHECKDB had found some problem. In SP2 the new developer for DBCC CHECKDB, Ryan Stonecipher, put in an error-log message, so from SP2 onwards you'll get a message indicating that CHECKDB is doing a deep dive because it found an issue. Interestingly, one of the questions I get is whether there's a way to expose how far through CHECKDB is and whether it has found problems. You can't expose which table it's checking, because it checks all the tables in parallel; the algorithms let it drive the IO across pages in allocation order rather than checking table one, then table two. But in 2005 there is progress reporting, which tells you which phase it's in, like the critical system table checks, and how far through it is.
Greg Low: This is the percent_complete column in sys.dm_exec_requests.
Paul Randal: Yes, that's right, percent complete. Now, theoretically it could also tell you when it knows it has found a corruption, but in 2005 we didn't get around to doing that.
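[The progress reporting discussed here can be watched from a second connection; a short sketch using the documented DMV columns:

```sql
-- Watch a long-running CHECKDB from another session.
-- "command" reports the current phase (e.g. DBCC TABLE CHECK),
-- percent_complete reports progress through that phase.
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 1000 AS est_seconds_remaining
FROM sys.dm_exec_requests
WHERE command LIKE 'DBCC%';
```
]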
Greg Low: While we're on the topic of DMVs, one that came up, and I'm interested because it's 2008 material: I like the DMV model of a view over non-persisted data, the current state of what's happening in the system. So I was puzzled that when they made the two new objects that replace sp_depends, they made them DMVs. That seemed odd; they look like views over the system catalog rather than dynamic management views, and I was puzzled they ended up being DMVs.
Paul Randal: That’s something I don’t know about.
Greg Low: It seemed a conceptually odd thing to me, but then I've got a couple of things in 2008 that puzzle me…
Paul Randal: Hold on a moment, though. DMVs is the term used to describe two sets of objects: dynamic management functions and dynamic management views. A DM function, like sys.dm_db_index_physical_stats, calculates information when you call it, whereas DMVs report on state information, or information that's being kept in a cache somewhere. It could be that they made them DMVs because it's static information and they just wanted to make the output composable. That could be the reason.
Greg Low: Another one in the same vein I've been having discussions about: I liked the SQL Server 2005 move to using DDL for everything that used to require system procs, but I've noticed in 2008 they seem to have decided some things are core and some are non-core, and there's a whole lot more system procs for things that could have been DDL. That concerns me, because it means you don't get DDL triggers, event notifications, policy checks, all sorts of things. I have a feeling there shouldn't be a big dividing line between what's DDL and what's system stored procs.
Paul Randal: I'd like to see everything be DDL for the reasons you described, and the party line has mostly been to make everything DDL. But problems come in because adding DDL for something is a lot more work to implement and test than a stored procedure, and engineering budgets are hard and fast. A team trying to get a feature in without having to rush the implementation might choose to do stored procs rather than DDL. Personally, I'd rather see the few extra weeks added to the engineering budget.
Greg Low: A good example I noticed: consider a policy that says tables have to be named a certain way. There's nothing to stop me creating a table with another name and then renaming it, which defeats the point of the policy. I even put an entry on the Connect site about it; I think sp_rename should be changed to something like ALTER <object> WITH NAME = … I'd love to see all of those things moved to DDL.
Paul Randal: One of the things I'm sure you understand, but listeners may not, is the sheer volume of requests that get sent in for any Microsoft product. My experience is with SQL Server; you'd be surprised, there are hundreds of requests for changes like the one you've just described. Every one is a few weeks of development, testing, and documentation time; these things aren't trivial, and some have more merit than others. Changing everything over to DDL is one I'd like to see, because DDL triggers and event notifications are fantastic, powerful features.
Greg Low: Anyway, back to CHECKDB and what gets done. I suppose one of the big differences is that people used to run a variety of commands, and CHECKDB now seems to do everything.
Paul Randal: That's a big one; there are misconceptions about what CHECKDB actually does, and it's one of the things I cover on a slide: what it exactly includes. It mostly does the table checks, and it checks all the filegroups at once. One thing it didn't do in 2000 was DBCC CHECKCATALOG, the higher-level database checks of the linkage between the system catalogs and the relationships between tables. People didn't know they needed to run CHECKCATALOG as well as CHECKDB. So one of the things I did in 2005 was include all of CHECKCATALOG in CHECKDB: once it's done all the other steps, it runs all of the CHECKCATALOG checks and tells you if there are any logical discrepancies between the system tables.
Greg Low: One of the other questions is how often you recommend running DBCC CHECKDB?
Paul Randal: It depends on so many different things; every situation is different. It depends on your service level agreement, how stable your IO subsystem is, your backup strategy. The main thing is that you have to run it. You can't say "my IO system is stable and I trust SQL Server" and never run it. That's sticking your head in the sand!
Greg Low: I've seen a lot of sites that do exactly that, I might add, over many years…
Paul Randal: They just don’t do anything.
Greg Low: Yeah.
Paul Randal: Oh yeah, absolutely, I've seen plenty as well. A lot of it nowadays is because databases are so large: running CHECKDB takes a long time and carries a lot of CPU and IO overhead, so people can't complete a CHECKDB inside their maintenance window, or don't have a window at all. So it becomes not "how often should I run CHECKDB" but "how can I run consistency checks at all". I've helped customers implement options apart from a straight CHECKDB; I'll run through some if you want. The first is to run no consistency checks, which is not a good option. The second is to break up the consistency checks: with a couple of hundred tables, you can take a ten-night period, run DBCC CHECKTABLE on the first group of tables on night one, the next group on night two, and so on, so you've run the equivalent of a CHECKDB over ten days. Another way is to use partitioning. You can use DBCC CHECKFILEGROUP, especially if you've got read-only filegroups with not-so-critical data and one read-write filegroup: every night, run DBCC CHECKFILEGROUP on the read-write filegroup, and once a week on the read-only ones. There's also offloading the check workload to another machine. A good DBA is taking regular full backups of the database, and one thing you should do anyway is validate the backup; as Kimberly and I say, "you don't have a backup until you've restored it."
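[The partition-based option Paul outlines could be scheduled roughly like this; a hedged sketch with hypothetical database and filegroup names:

```sql
USE SalesDB;

-- Nightly: check only the volatile read-write filegroup.
DBCC CHECKFILEGROUP (N'CURRENT_RW') WITH NO_INFOMSGS;

-- Weekly: cover the read-only historical filegroup, plus the
-- allocation and catalog checks a full CHECKDB would include.
DBCC CHECKFILEGROUP (N'HISTORY_RO') WITH NO_INFOMSGS;
DBCC CHECKALLOC;
DBCC CHECKCATALOG;
```
]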
Greg Low: Indeed. Getting that wrong is usually a career-limiting move.
Paul Randal: Absolutely, yes. You've got to restore the database somewhere to validate the backup anyway, so why not take another system, restore it there, and run CHECKDB on the restored database? The one problem is that if CHECKDB finds corruption, you don't know whether it's in the restored database, the backup, or the production database, so you're forced to go back and run consistency checks on the production database.
Greg Low: At least if it does work, you know both have to be okay…
Paul Randal: Right, yeah. So there are methods of getting around a full CHECKDB. The other one is to use the PHYSICAL_ONLY option. Nowadays I'm not worried about CHECKDB finding corruption caused by SQL Server itself; when things go wrong it's almost always the IO subsystem. So turn on page checksums and run CHECKDB WITH PHYSICAL_ONLY: that does the system table checks and the allocation checks, then runs through every allocated page doing the page audit, but not the expensive logical checks. It's IO-bound rather than CPU-bound. So there are a bunch of methods you can use rather than running no consistency checks at all.
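[The lightweight option just described is two statements; a hedged sketch assuming a database named SalesDB:

```sql
-- Page checksums (the 2005 default for new databases) let the
-- physical-only check detect IO-subsystem damage on every read:
ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;

-- Physical-only check: system table checks, allocation checks, and a
-- page audit of every allocated page, skipping the logical checks.
DBCC CHECKDB (SalesDB) WITH PHYSICAL_ONLY, NO_INFOMSGS;
```
]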
Greg Low: Paul, what I usually do with everyone is ask where you live, what you're doing, sports, hobbies, passions?
Paul Randal: It's one of the things I blog about. Kimberly has been away a couple of weeks in India, teaching.
Greg Low: For those who haven't caught the news, there's the fact that it was the wedding of the year earlier this year.
Paul Randal: We got married in July, which was very cool.
Greg Low: It sounded like a great event; we had Richard and Carl commenting on .NET Rocks! about how good it was.
Paul Randal: We actually recorded a session of .NET Rocks!, plus drunken interviews at the barbeque we had with Carl and Richard, and they've assured me the tapes will never see the light of day. But yeah, that was very cool. Anyway, with Kimberly away a couple of weeks, one of the things I like to do is make models, Lego models recently; they have some really good adult-oriented Lego kits, and the one I just got, and spent the two weeks she was away building, was the Millennium Falcon from Star Wars, the biggest Lego set they've ever put out.
Greg Low: I know the one you're talking about; my father-in-law is a mad Lego guy and has the same one.
Paul Randal: Right, yeah. I've been a Lego freak since I was a kid, and it arrived the day before Kimberly left; I probably spent 24 hours of the two weeks she was away on it. Absolutely stunning. The little Lego figures you get are meant to be modeled at the same scale. I'm actually going to take some photos and blog it today.
Greg Low: We'll look for that on the blog, that's great.
Paul Randal: Yeah… We also like to dive.
Greg Low: Yes, you're into diving as well; we've talked to Kimberly about that over the years…
Paul Randal: I'd always wanted to get into scuba diving but had never met anyone into it, so when I met Kimberly it was the perfect opportunity to learn. A year ago I did my PADI diving certification in Puget Sound; we live in Redmond, so there's tons of water and diving around. Last Christmas and new year we went out to Wakatobi in Indonesia, where there's fantastic diving. Kimberly had gone out a few years before me on a live-aboard, which is where you don't get off the boat except to dive, and went around the Indonesian islands including Komodo, and got to see the komodo dragons up close. Unfortunately we've been so busy this year we haven't fit a dive trip in, so it'll be 2008 before we're diving again.
Greg Low: Tropical waters are so clear and warm.
Paul Randal: They're very clear, 100 to 200 feet of visibility under water. The problem with Indonesian waters, apart from the clarity, is that they discharge sewage straight into the sea, and if you happen to swallow the water, a few weeks later you get nasty intestinal problems for three weeks. So that's one of the downsides.
Greg Low: I've found that in a number of places. A fishing village in Malaysia a year or two back invited us in for a fish lunch, and I couldn't do it after watching the water in the area.
Paul Randal: I didn't know that until after I'd been there. Hopefully when I go back and do more diving, I'll keep my mouth completely closed.
Greg Low: You'd been doing a bunch of work on 2008 up until the point you left recently.
Paul Randal: Yeah, there are some very interesting things happening in 2008, and I'd be happy to tell you about the most interesting ones if you want. My background is in corruption and data recovery, and there's a cool new thing in database mirroring: it can automatically repair corrupt pages. In a database mirroring partnership, the principal and mirror databases are exact copies, so the feature works like this: if there's an IO error reading a page in the principal database, database mirroring notices the error, goes over to the mirror, and requests its copy of the exact same page, and if that copy isn't corrupt, it pulls the page across and uses it to replace the page on the principal. Even cooler, the same thing happens in reverse: if the log is being redone on the mirror and it comes across a corrupt page, it goes to the principal, gets the page, and puts it onto the mirror. But although it does this cool thing, it's not a replacement for noticing the corruption and taking action to make sure it doesn't happen again. It's about reducing downtime, applying a band-aid while you figure out what's going on.
Greg Low: Good point. I spent a long time in engineering at HP in the 80s, and even then the thing that amazed me was that people would hit problems at the highest level and think they could just fix things, but in the end you never understand what's wrong if you don't work out why it started happening in the first place.
Paul Randal: In my presentations on CHECKDB I always have a page on bad advice. I see people on forums saying "restore the backup, run repair, and carry on." No one says "do root cause analysis and figure out what happened."
Greg Low: Yeah. IO subsystems returning bad data don't get better by themselves.
Paul Randal: Next time you won't be lucky enough to be able to run repair, and you might be completely hosed. A tool I'll plug is SQLIOSim, which the SQL team put out, released in 2006 I believe.
Greg Low: To stress the system…
Paul Randal: It's the replacement for the old SQLIOStress. It cranks up more IO than SQL Server ever will, more than the IO subsystem vendors' tools will, and tells you if it comes across corruption on reads, things like that. It's free, and there's information on various sites, including the PSS engineers' blog, on interpreting the results and what they're telling you. If you're ever hitting corruption, download and run the tool; it'll help tell you where the problem is.
Greg Low: One question with automatic page repair: in sessions Don V. was showing earlier in the year, he executed a command and the command failed with an 824, and then some time later the same command worked fine. The thing that wasn't clear was whether there's any indication to the client that the page might potentially come back okay later.
Paul Randal: Okay, so the answer is no. If you run a query and get an 824, which says there's a page checksum problem, the query will fail, and it will repeatedly fail with the IO error until the page is repaired. At the level where that happens, the query doesn't know that mirroring is running and that automatic page repair might kick in and fix the page, so the only error it can give is that it hit an IO error. It's up to the application logic to do the retry.
Greg Low: That's the trouble; I put an entry on the Connect site about this. Even if it can't know for certain, if there's a chance the page comes back, the client almost needs some way of knowing the database is in a mirror and is synchronized, so there's a real chance a retry will succeed, rather than just retrying for the sake of it.
Paul Randal: Sure. There is a way to code that up in the application: you could look at the mirroring DMV.
Greg Low: Ah yes, sys.database_mirroring, sys.database_mirroring_endpoints, one of those.
Paul Randal: You can look and see that the database is mirrored and trigger a more intelligent retry in the application, but my guess is that the odds of application developers actually doing that are about zero.
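[A hedged sketch of the check being described: before retrying after an 824, see whether the database is mirrored and synchronized, so automatic page repair has a chance of having fixed the page. The database name is hypothetical; the catalog view and columns are the documented ones:

```sql
SELECT DB_NAME(database_id) AS db,
       mirroring_role_desc,
       mirroring_state_desc            -- e.g. SYNCHRONIZED
FROM sys.database_mirroring
WHERE database_id = DB_ID(N'SalesDB')  -- hypothetical database name
  AND mirroring_guid IS NOT NULL;      -- NULL means not mirrored
```

If this returns a SYNCHRONIZED row, a short delayed retry of the failed statement is worth attempting; otherwise the 824 should be surfaced immediately.]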
Greg Low: You never know. Sometimes it's a bit like retrying deadlocks in the old days; I find the same thing when getting people to recode: if they've embedded that logic at a low level it's not that big a deal, because hopefully the common piece of code gets reused umpteen times.
Paul Randal: The trick is educating people, and one thing you're probably aware of, and that I lamented at Microsoft: Books Online is fantastic at describing the features Microsoft bangs out and telling you how to use them, but it doesn't give the peripheral information around them: programming around the features, combining them, coping with failures, and so on.
Greg Low: Prescriptive guidance is the real issue. I find at the moment, and it's no doubt a separate show discussion, with things like LINQ and sync, the frustrating thing is that the examples tend to fit into a five-to-seven-minute demo, but there's very little appropriate prescriptive guidance.
Paul Randal: Right. Yep, you’re absolutely right.
Greg Low: There's no obvious "yes, that's what we'd do if we did this for a living…" guidance. Apart from mirroring, what else? Compression?
Paul Randal: Yeah. One of the problems with mirroring an active database is that the log is sent over a communications link between the principal and the mirror, and if that link is constrained it can lead, in synchronous mode, to delays in transactions committing, because each commit has to wait for the log to reach the mirror, or, in asynchronous mode, to log building up that hasn't reached the mirror, so a failover would lose a bunch of data. In 2008 they've done log stream compression: a proprietary algorithm is implemented, and in some testing they've achieved up to five times compression of the log stream compared with no compression at all. The trade-off is that it takes CPU: you do get extra CPU load, so if your system is pegged you might not want to use compression, but it's on by default when you're running 2008, with a trace flag to turn it off.
Greg Low: I see very few SQL Server boxes that are CPU bound. There's the odd one, but that'll be something causing recompiles all the time; apart from that, most seem to be disk bound. There are plenty of people throwing three- and four-proc boxes at workloads where you barely see a heartbeat.
Paul Randal: One of the reasons I'm excited about log stream compression: I get questions about mirrored databases, and mirroring requires the full recovery model, so all operations are fully logged, including index rebuilds. If mirroring is on, you can't do efficient index maintenance, because you can't switch to the bulk-logged recovery model. Because log stream compression compresses the log records generated by index rebuilds, depending on the data it may give enough compression to make index maintenance on a mirrored database feasible. Pretty cool…
Greg Low: While we're on compression, we should mention page compression. Is that an area you know well, with the changes in 2008?
Paul Randal: I know a fair amount about that. There are various things compression does. The page format is changed when compression is on, so that every column is stored as a variable-length value; there's no concept of fixed-length columns in the record anymore. There are different types of compression. First there's compressing the column values themselves, like the vardecimal storage format added in 2005 SP2. Imagine an integer column, stored as four fixed-length bytes in 2005 and before: if it's storing the value zero or one, you don't need four bytes, and it can be compressed down to a fraction of that. In 2008, row compression changes the row format so that all column values are stored as variable-length columns: if a value can be stored in less than the data type's full width, it will be, and it's represented efficiently in the row. The other thing is page-level compression, for when you have records with repetitive column values, which is common in some applications such as SAP, where you might have a column holding a country value or a date value…
Greg Low: I see commonly in audit tables…
Paul Randal: Right.
Greg Low: They lend themselves really well to it, where you have certain values being endlessly repeated through the rows.
Paul Randal: What it does then is create a dictionary record on the page, extract the common values into it, and replace the common values in the data or index records with a token that's a lookup into the dictionary record, achieving the space saving that way. The trade-off again is space saving versus CPU usage: pages are stored in their compressed format in the buffer pool, and it's not until the access methods retrieve a row from the page that it's decompressed.
Greg Low: The upside is you fit far more rows in the buffer pool.
Paul Randal: Absolutely, and that’s a reason the compression is there. It used to be a big problem when migrating a database from Oracle or IBM to SQL Server: because there was no compression in SQL Server, the size of the database would suddenly bloat. That was a big turn-off for migration.
Even though you were saving money on software and maintenance, you were actually paying more for storage. So this is a big boon. From what I’ve seen, talking to customers at Microsoft, data compression is hotly anticipated.
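The row and page compression Paul describes surfaced in SQL Server 2008 as a `DATA_COMPRESSION` option; the table and index names below are hypothetical:

```sql
-- Row compression: store every column as variable-length, so an INT
-- holding 0 or 1 takes about a byte instead of a fixed four.
ALTER TABLE dbo.AuditLog REBUILD WITH (DATA_COMPRESSION = ROW);

-- Page compression: row compression plus prefix and dictionary
-- compression of repeated values on each page.
ALTER INDEX IX_AuditLog_Country ON dbo.AuditLog
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Estimate the space saving before committing to the CPU trade-off.
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo', @object_name = 'AuditLog',
    @index_id = NULL, @partition_number = NULL,
    @data_compression = 'PAGE';
```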
Greg Low: When I ask customers what’s hotly anticipated, it’s the new date and time data types…
Paul Randal: Is that the stuff that was cut from 2005?
Greg Low: Yes, and I think people had been waiting so long for 2005 that, as I’ve been going around the country doing 2008 sessions, the separate date and time data types get the “woo-hoo” moments in the session. Should we also mention what’s in 2008 for dealing with partitioned tables?
Paul Randal: Yes, there’s a change in how parallel plans work with partitioning. In 2005, the way a parallel plan worked, one thread was allocated to each partition to process that partition, in terms of a scan with search predicates. If you were in a situation where one partition was much larger, the threads on the smaller partitions would complete quickly, and the total run time would be bounded by the thread running through the large partition. Data skew could basically screw up the plan. In 2008, Microsoft added an option to switch how parallel plans work: instead of one thread per partition, all the threads process the partitions in sequence. Say you have an eight-way box running eight threads: the threads start on the first partition, each processes a little chunk, and as soon as a thread finishes it moves on to the next partition. So it’s moved from a one-to-one to a many-to-one model; if you have a data-skew situation, you eventually get to the large partition with all the threads processing it.
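For context, here’s a sketch of the kind of partitioned table this affects; the boundary values, filegroup placement, and names are illustrative, not from the show:

```sql
-- Partition orders by year; a busy year makes one partition much
-- larger than the others -- the data-skew case Paul describes.
CREATE PARTITION FUNCTION pfOrderYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2005-01-01', '2006-01-01', '2007-01-01');

CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders
(
    OrderId   int      NOT NULL,
    OrderDate datetime NOT NULL
) ON psOrderYear (OrderDate);

-- A parallel scan touching several partitions: in 2005 each partition
-- got at most one thread, so the skewed partition bounded the run time;
-- in 2008 all threads can cooperate on each partition in turn.
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '2005-01-01';
```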
Greg Low: I suppose while we’re on that, one of the ongoing discussions in the performance groups is about setting max degree of parallelism. Any guidance for people on whether they should change it?
Paul Randal: Umm, okay, the issue I hear about is predictability. I know, for instance, that SAP recommends anyone running SAP set max DOP to one.
Greg Low: Single processor, single thread…
Paul Randal: You never get parallelism, and you get a degree of predictability. As with anything, what’s the right advice? I can’t give one right answer. What I’d say is: when you’re testing your application, test with a production workload, which many people don’t, and see which way works for you. That’s the best I can do…
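The trade-off Paul mentions can be applied instance-wide or per query; the values below are illustrative, not recommendations:

```sql
-- Instance-wide cap, as in the SAP guidance Paul mentions.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- Or cap a single query while leaving the instance setting alone,
-- trading peak throughput for predictable run times.
SELECT CustomerId, SUM(Amount) AS Total
FROM dbo.Orders
GROUP BY CustomerId
OPTION (MAXDOP 1);
```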
Greg Low: I thought it was interesting — we had the SQL Down Under camp last weekend.
Paul Randal: I was talking to Brian Matson, who was…
Greg Low: Excellent. Brian hadn’t managed to get along the previous year, so I was pleased he made it. He must’ve enjoyed it?
Paul Randal: Yes, he did, very much so.
Greg Low: It was a great weekend. I should mention Kevin Kline was there as well…
Paul Randal: Oh, Kevin’s cool…
Greg Low: Kevin mentioned situations where he’d recommended killing off parallelism and setting it to one. I wondered if that was a Quest-related experience, and whether it was to do with things like SAP on large systems, where I’ve seen that recommendation.
Paul Randal: It comes down to predictability. With a single proc, a single thread, you have predictability. With the processor able to parallelize on the fly with whatever resources are available to it, you don’t get predictability. If your SLA says a query must reply in so many milliseconds, the only way to guarantee that is to set a certain max for parallelism.
Greg Low: I’m always amazed — in .NET, when I try to do any multi-threaded code it’s just so hard anyway, and it must be quite a handful to build parallel execution plans. I have great respect for the people who wrote a lot of that…
Paul Randal: It seems all the developers in that particular area of the processor have PhDs — a very smart bunch…
Greg Low: I remember a couple of years ago Jim Gray, talking about multi-processing, described three phases people go through: one phase where it looks like voodoo or magic, another where you think you understand it and get confident, and a third where you get wise and find out all the things you didn’t think of or follow.
Paul Randal: Very true — in any program, absolutely.
Greg Low: Thanks for that. I suppose the other thing we need to know is where we’re going to see you next — what’s happening?
Paul Randal: The next two things coming up for us are SQL Connections in Las Vegas in three weeks, with a couple of pre-con sessions and a post-con, and then flying to Barcelona for TechEd IT Forum, where I think Jeff Warton will be proctoring some labs…
Greg Low: Jeff organized the Canberra SQL group and was there on the weekend helping me out with the camp — it was fabulous.
Paul Randal: After that, I guess the next conference we’re doing is probably Connections in the spring, in Orlando. We’ve put out a call for abstracts for the conference, and I might be assisting as co-chair of the conference with Kimberly. Pretty cool. In terms of getting down to Australia…
Greg Low: Yes, that would be good. The diving would be good.
Paul Randal: The very earliest we could get down there is after summer next year. We don’t have any concrete plans, but we’d love to. We keep saying we’ll be down there. Maybe late summer or fall next year we might make it.
Greg Low: Excellent.
Paul Randal: Yeah, it’s crazy busy, as you well know…
Greg Low: Indeed. Well, listen, thank you so very much for your time, Paul — it’s been a pleasure having you on the show!
Paul Randal: Absolutely, thanks for inviting me — sorry it took so long to get us hooked up…
Greg Low: No, not at all — that’s great.