# This is Columnstore

This weekend I finally spent some time getting up to speed on the new xVelocity Columnstore index feature in SQL Server 2012 Enterprise Edition.

This is part one of three blog articles about my initial experiences with them and, more interestingly, a quick look under the hood at how they work.

## What are columnstore indexes?
In summary, the columnstore index feature is both a new type of non-clustered index AND a new query processing mode built into the SQL Server 2012 database engine. Its aim is to massively increase (approx. 10x) the performance of the types of queries used within a data warehouse or, in simpler terms, to query very large tables as quickly as possible.

There are many explanations of what columnstore indexes actually are, how they're different to row-based indexes, and how SQL Server's implementation works, so I'll point you to perhaps the best links for these topics:

• How do Column Stores work? – Thomas Kejser here
• Inside SQL Server’s implementation – Conor Cunningham here

## How would I use them?
Hopefully, the two links above will give you an idea as to what the feature was designed to do and the circumstances in which you might use them. The next section focuses on how you would use them and for this series of blog articles I’m going to use the two simple tables shown below:

Having created the tables and regular clustered indexes on them, I used random number based values to create 30 million rows in the fact table across 10 products:

```sql
create table tblFact (rowID bigint identity(1,1),
    custID int, prodID int, Sale money);
create table DimProd (prodID int, prodName varchar(200));
create unique clustered index idx_cls_tblFact on tblFact(rowID);
create unique clustered index idx_cls_Prod on DimProd(prodID);
go
insert into DimProd (prodID, prodName)
values (1, 'Car'), (2, 'Van'), (3, 'Truck'),
    (4, 'Taxi'), (5, 'Horse'), (6, 'Helicopter'),
    (7, 'BizJet'), (8, 'JumboJet'), (9, 'Ferry'),
    (10, 'Hovercraft');
go
insert into tblFact (custID, prodID, Sale)
select convert(int, rand()*10),
    convert(int, rand()*10) + 1,  -- corrected from rand()*100 so prodIDs match DimProd's 1-10
    convert(money, rand()*10000);
go 1000000
```

Followed by:

```sql
insert into tblFact (custID, prodID, Sale)
select top 500000 custID, prodID, Sale from tblFact;
go 58
```

This will give you 30 million rows for about 1GB of disk space.
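As a quick sanity check, and assuming the table names from the script above, you can confirm the row count and space used along these lines:

```sql
-- Confirm the fact table's row count and how much space it uses
select count_big(*) as total_rows from tblFact;
exec sp_spaceused 'tblFact';
```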

I then ran a simple query against the table that’s designed to simulate a simple star schema and the type of query a data warehouse might use:

```sql
select DimProd.prodName,
    sum(tblFact.Sale)
from tblFact, DimProd
where tblFact.prodID = DimProd.prodID
group by DimProd.prodName;
```

I could have added supporting non-clustered indexes, but traditional index design isn't the point of this article. With the 30 million rows in my fact table, the query takes 4 seconds to run.

The next step is to actually add a columnstore index. This is quite a big step, as:

• You can only have 1 columnstore index per table
• Creating the columnstore index makes the table read-only

The following command adds a columnstore index:

```sql
create columnstore index idx_cs_tblFact on tblFact(custID, prodID, Sale);
```

You can tell from the time it takes to create, about 50 seconds on my SSD drive, that there's a lot of work going on; for comparison, creating the equivalent regular non-clustered index takes about half the time.
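One way of working around the read-only limitation, sketched below rather than prescribed, is to disable the columnstore index before a data load and rebuild it afterwards (the rebuild takes a similar amount of time to the original create):

```sql
-- Disabling the columnstore index makes the table writable again
alter index idx_cs_tblFact on tblFact disable;

-- ... perform data loads here ...

-- Rebuilding the index re-enables it and returns the table to read-only
alter index idx_cs_tblFact on tblFact rebuild;
```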

If I then execute the query used above again, the query optimizer will use the columnstore index AND the new batch query processing mode that Conor mentioned, and the query will now execute in less than a second.
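To compare the two execution modes side by side, SQL Server 2012 provides a query hint that makes the optimizer ignore the columnstore index; a quick sketch using our earlier query:

```sql
-- Same aggregate query, but forcing the optimizer back onto the regular
-- indexes and row mode execution for comparison
select DimProd.prodName, sum(tblFact.Sale)
from tblFact, DimProd
where tblFact.prodID = DimProd.prodID
group by DimProd.prodName
option (ignore_nonclustered_columnstore_index);
```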

Although the performance improvement in our example might not seem great, 4 seconds down to 1 second, in further tests I increased the fact table from 30 million to 90 million rows, yet didn't see a 3x increase in query runtime.

## sys.indexes
Finally, for this article, you can begin to see how columnstore indexes are both different and similar to normal indexes by looking at their properties in SSMS and sys.indexes with this query:

```sql
select * from sys.indexes where type = 6;
```

Books Online lists index type 6 as a “Nonclustered columnstore index”, while index type 5, interestingly, is a “Clustered xVelocity memory optimized columnstore index (Reserved for future use.)” This suggests there might be quite a lot more than just updatable columnstore indexes on the roadmap.
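You can also peek at the index's internal storage: each column is held as a series of compressed segments of roughly one million rows, visible through the sys.column_store_segments catalog view. A sketch, joining via hobt_id as in SQL Server 2012:

```sql
-- Show the compressed segments behind the columnstore index on tblFact,
-- one row per column per segment, with row counts and on-disk sizes
select s.column_id, s.segment_id, s.row_count, s.on_disk_size
from sys.column_store_segments s
join sys.partitions p on s.hobt_id = p.hobt_id
where p.object_id = object_id('tblFact')
order by s.column_id, s.segment_id;
```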

Of course, this is all just my first look at columnstore indexes; if you know something I've not mentioned, please feel free to comment on the post!

# Efficient, partial, point-in-time database restores

This article is about a situation that many of us could describe the theoretical approach to solving, but would then struggle to understand why SQL Server wasn't following that theoretical approach when we tried it for real. The requirement was:

• a partial database restore, 1 of 1300 filegroups;
• to a specific point in time;
• using a differential backup, and therefore;
• without restoring each transaction log backup taken since the full backup.

The last point might sound unnecessary because you're restoring a differential backup, but the restore script originally being used meant SQL Server still wanted every transaction log backup taken since the full backup restored. This article explains the background to the situation, shows the successful restore commands, and identifies what was causing every transaction log to need to be restored.

For this article, let’s imagine we have a database of the following configuration:

And, let’s imagine it has the following backup schedule:
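The timeline image isn't reproduced here, but the backup sequence it represents can be inferred from the file names used in the restore commands later in this article; the commands below are an assumed approximation of that schedule:

```sql
-- Assumed backup sequence: full (A), two logs (A1, A2), differential (B), two logs (B1, B2)
backup database partial1 to disk = 'c:\parttest\partial1_FULL_A.bak' with init;
-- ... transaction log backups A1 and A2 taken here ...
backup database partial1 to disk = 'c:\parttest\partial1_DIFF_B.bak' with differential;
backup log partial1 to disk = 'c:\parttest\partial1_Log_B1.trn';
backup log partial1 to disk = 'c:\parttest\partial1_Log_B2.trn';
```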

Then, let's assume we want to restore fgTwo to the point in time labelled above in order to recover data from a table it stores. The partial database restore this requires must also include the primary filegroup, so SQL Server will restore it automatically, making the restored database look like the following:

You’d expect the path to restore fgTwo to look like the path on the left of the diagram below, but for some reason the client was being forced to perform the restore steps on the right, despite their T-SQL restore commands appearing to follow the syntax in Books Online.

To show how to restore the filegroup using the path on the left, and highlight what was causing the path on the right to be required, I’ll use a series of T-SQL restore commands.

This first command restores the parts of the database we're interested in from the full backup file, labelled "A" in our timeline; the two most important parameters, PARTIAL and the FILEGROUP list, are what turn a complete database restore into a partial filegroup restore.

```sql
restore database partial2
filegroup = 'primary', filegroup = 'fgTwo'
from disk = 'c:\parttest\partial1_FULL_A.bak'
with norecovery, partial, replace,
move 'primary' to 'c:\parttest\primary.mdf',
move 'primary_log' to 'c:\parttest\primary_log.ldf',
move 'fTwo' to 'c:\parttest\fTwo.ndf';
```

The next command restores from the differential backup, labelled “B” in our timeline, and means we don’t need to restore transaction logs A1 and A2.  In the client’s real-world scenario they actually had 120+ transaction logs between the full and differential backups, hence the desire not to have to restore each of them.

It was this command that was the cause of them having to restore every transaction log since the full backup: repeating the FILEGROUP parameter was the culprit. Telling SQL Server again which filegroups to restore was forcing it to restore the entire log chain since the full backup, so the parameter should be left out of the differential restore.

```sql
restore database partial2
-- filegroup = 'fgTwo'  -- repeating this here forces the whole log chain to be restored
from disk = 'c:\parttest\partial1_DIFF_B.bak'
with norecovery,
move 'primary' to 'c:\parttest\primary.mdf',
move 'primary_log' to 'c:\parttest\primary_log.ldf',
move 'fTwo' to 'c:\parttest\fTwo.ndf';
```

Finally, with that step performed, you can then restore the subsequent transaction logs from after the differential backup with a relevant STOPAT parameter:

```sql
restore log partial2
from disk = 'c:\parttest\partial1_Log_B1.trn'
with norecovery, stopat = '2012-05-29 15:13:12.750';

restore log partial2
from disk = 'c:\parttest\partial1_Log_B2.trn'
with norecovery, stopat = '2012-05-29 15:13:12.750';

restore database partial2 with recovery;
```

At this point, the database is recovered with just the primary and fgTwo filegroups online; you can look in sys.master_files to see the state of all of the database's data files and which are currently online.
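For example, a query along these lines shows which files came online, assuming the database was restored with the name partial2 as above:

```sql
-- Check the state of each of the restored database's files;
-- only primary and fTwo should show as ONLINE
select name, type_desc, state_desc
from sys.master_files
where database_id = db_id('partial2');
```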

In summary, this article showed how to perform a partial database restore, and how you can easily have to restore more than you were expecting to due to a simple “over-clarification” in a restore command.

I have a complete demo script of a more thorough test available for download from here.

# Read Committed Snapshot Isolation– Two Considerations

The Read Committed Snapshot database option in SQL Server, known perhaps more accurately as Read Committed Snapshot Isolation or RCSI, can be enabled to stop readers blocking writers and writers blocking readers.  However, enabling it can cause two issues with the tempdb database which are often overlooked.

One can slow down queries, the other can cause queries to fail.

## Overview of RCSI

Enabling the option changes the behaviour of the default SQL Server isolation level, read committed.  By default, this isolation level means that if one session is updating a range of rows then another session cannot execute a select query against the same set of rows; the shared locks the select query needs are incompatible with the eXclusive or Update locks that the update query will be using; the session executing the select query ends up being blocked until the transaction involving the update query has committed.  That’s a very high level overview of what happens so those wanting to know more or become an MCM should watch this video here.

Configuring a database to use the Read Committed Snapshot option aims to stop this happening by using Row Versioning.  Essentially, before a row is locked in preparation for changes a copy of it is put in a special part of the tempdb database, known as the version store.  When a select query then wants to read that row it uses the copy in the version store rather than waiting for the actual row to have its locks released.  Again, I recommend watching the video linked to above to see just what happens as I’m just giving a high level overview of the feature before I discuss the primary points in this article.
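For reference, the option is enabled per database. A sketch, using a hypothetical database name, and noting that the command needs exclusive access to the database to complete:

```sql
-- MyDatabase is a placeholder name; WITH ROLLBACK IMMEDIATE kicks out other
-- connections so the option change isn't blocked - use with care
alter database MyDatabase
set read_committed_snapshot on
with rollback immediate;
```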

## Tempdb Performance

One of the first things people mention when you suggest enabling the Read Committed Snapshot option is that it will put pressure on the tempdb database which stores the version store.

That's correct, it will, but a properly configured tempdb database should never be a bottleneck in your environment.  You can read here how super-tuning expert Thomas Kejser put the world's biggest database workloads through SQL Server and never had to worry about tempdb performance.

There are two considerations I recommend for configuring your tempdb database in order to ensure good performance:

• The number of tempdb data files – in the absence of your own monitoring for PFS, GAM or SGAM contention within tempdb, I hear people at Microsoft these days recommending 8 equally sized data files, which I'd agree with.  They don't each need to be massive, their total size could still be 8MB(!), but having more increases the number of internal system objects within the database and so reduces the chances of a single system object becoming contended.  (The rule in SQL Server generally tends to be that if you have any kind of resource contention, add more of that resource; whether it be IOPS, locks or PFS pages etc.)  Having too many data files is much less likely to cause you problems than having too few, just be sensible about how many you have.

• The type of physical storage for the tempdb data files – as always, configure the storage you use for tempdb like you would for your database files: fast, reliable and striped arrays.  While Thomas will tell you that using dedicated disks isn't important for him, others will tell you it's crucial.  The only additional comment I have is that some people deploy NAND storage specifically for tempdb; in fact SQL Server 2012 had some changes to support this with failover clusters.
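For illustration, extra equally sized data files can be added along these lines (the logical names, paths and sizes below are placeholders, not recommendations):

```sql
-- Add further equally sized tempdb data files; repeat until there are 8 in total
alter database tempdb
add file (name = 'tempdev2', filename = 'T:\tempdb\tempdev2.ndf', size = 100MB);
alter database tempdb
add file (name = 'tempdev3', filename = 'T:\tempdb\tempdev3.ndf', size = 100MB);
```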

Crucially though, having a poor performing tempdb database won’t break a user’s queries, they may just run slower than they could do, especially if you’re using the read committed snapshot option.

## Tempdb Sizing

How much space your tempdb database needs depends on how you use SQL Server; some people need very little space while others need lots.  What's relevant to this article is that the read committed snapshot option doesn't just like tempdb space, it needs it.

We mentioned how rows get written to the version store in tempdb as they get locked in the source user database, but what happens if there’s no free space in tempdb when that process goes to happen?  I did a test to find out, you can email me for my demo script if you want to try it for yourself.
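Before running a test like that, it's useful to be able to see how much of tempdb the version store is actually consuming; a sketch using a standard DMV:

```sql
-- How much of tempdb the version store currently reserves (pages are 8KB)
select sum(version_store_reserved_page_count) * 8 / 1024.0 as version_store_mb
from tempdb.sys.dm_db_file_space_usage;
```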

In summary, when tempdb is full update queries will still execute successfully but select queries which expect to find the rows they need in the version store will fail when they can’t find them there.  You’re likely to see this message:

Msg 3958, Level 16, State 1, Line 1

Transaction aborted when accessing versioned row in table 'dbo.t1' in database 's1'. Requested versioned row was not found. Your tempdb is probably out of space. Please refer to BOL on how to configure tempdb for versioning.

So how do you stop that from happening?

The safety option I suggest is to place tempdb on a drive with lots of free space, although it's acceptable to configure tempdb to start with small initial data file sizes.  However, make sure you configure the data file auto-grow settings so that when tempdb runs out of free space SQL Server will automatically allocate it some more.  There are methodologies you can use for setting a good initial size for tempdb, but for now enabling the autogrow option is a good failsafe.  Then, after a couple of months, see what size the data files have grown to and consider setting that to be their initial sizes.
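As a sketch, and assuming the default logical file name of tempdev, the auto-grow setting can be configured like this:

```sql
-- Let the tempdb data file grow in 500MB increments when it runs out of space
alter database tempdb
modify file (name = 'tempdev', filegrowth = 500MB);
```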

## General Recommendations, Test

Those are very general recommendations, along with setting the data file auto grow size to 500MB, but in the absence of specific testing or advice for your environment they might be something you want to try using in a test environment to help you learn more about tempdb behaviour in your environment.

# Knowing when you’re ready to attempt to become an MCM of SQL Server

Before we start, I should point out a couple of things: this article has been reviewed by the former MCM program manager Robert Davies, so don't think it gives away secret information about how to prepare – none of that is covered by any NDA anyway.  Also, realise that the MCM is a very high level of certification; it's not for everyone.  Read the experience the program assumes you have and ask yourself if that matches your background.

## How I Prepared

There were three specific steps I took in my preparation, and luckily I was able to make most of them part of my day-to-day role at Coeo. They were:

1. Real-world experience

Equally importantly, you need to find a way to make the intense preparation more exciting than just sitting down in an evening and reading a book. If you take that approach you'll run out of mental energy and begin cutting corners in your preparation, which will reduce your knowledge and familiarity with features. I introduce the word familiarity there for the first time; it's an important word for the MCM's requirements of you.

## Familiarity

This certification isn't about knowing every syntax, command or trace flag ever published; it's about knowing what features SQL Server has, how they work, how they work with each other and their limitations. No book will teach you all of that; only by having used the features a lot will you become familiar with them. For example, if the lab exam asks you to meet requirement X then you should immediately know that feature Y is the answer; that is not the time to be teaching yourself the difference between two similar features. Of course, if you also know there's something which can stop a straightforward implementation of feature Y, then your familiarity with it needs to tell you in advance what that issue will be – and how to fix it.

## Real-world Experience

Nothing can prepare you better for the MCM exams than real-world experience of having used SQL Server, that’s where your familiarity comes from. Of course, that probably doesn’t mean managing your company’s HR or Intranet databases as the MCM is focused at enterprise environments with requirements and issues which require MCM level experts to resolve them. I was lucky that my day job put me in those enterprise situations almost every day and while you may not think you have that luxury you’d probably be surprised.

My advice is to do the following regardless of what your involvement with SQL Server is:

• become familiar with the MCM’s exam objectives and your understanding of what they require you to know (the readiness videos may help if you’re not sure) in terms of features and best practices, they are available here
• generalise each SQL Server request or requirement you get given in your current role, remove what makes it specific to your environment and most importantly, the limitations your current environment might put on how you would deliver it
• then, in your own time, consider how you might meet that requirement in a way that best shows your understanding of the MCM’s exam objectives. For example, using peer-to-peer replication might not be possible in your environment but it could be the most appropriate way an MCM would implement a solution in an environment with no legacy restrictions
• with your chosen approach planned, implement it on some virtual machines. You don't need to worry about having complicated database schemas or large volumes of data, nor do your virtual machines have to be fast. You should be able to create the databases and data for your tests in 60 seconds; keep them simple so you can focus on getting the feature to work so you can learn about it. Spending a whole evening just trying to replicate all of the tables in your intranet database is an unnecessary waste of time
• with the feature installed, remove it and repeat the setup; determine if it's best to use the GUI or T-SQL setup steps in case there are options you can't choose through the GUI, then find out how you troubleshoot and diagnose that feature, whether there are DMVs or log files etc.

That may seem like a lot of effort but that’s the level of preparation I put into learning about SQL Server for my exams. Of course, there will be some features you might know more than enough about already, great, tick those off and move onto those you don’t know so well.

The MCM web site gives you a very thorough reading list, here, which covers almost every part of SQL Server in almost every depth you can imagine. How much of that reading list you can ever read is a very personal decision based on time, ability and ultimately your interest in some of the subjects. What I will say is that every little thing you know about SQL Server when you go into the exams will boost your confidence, especially if you find you know more than is required. Some of the books can be expensive and difficult to get hold of so don’t feel they’re compulsory reading. Instead, Books Online and blogs can give you the equivalent knowledge but you will have to spend more time searching for it. I’m not going to recommend any specific blogs, only that the types of blog you read should typically be authored by people who teach pre-cons or deliver level 500 training at conferences.

When you do read a book, make sure you have a good note taking method. There’s no point in reading a book, getting to the end and forgetting what you read 2 days ago at the beginning. I used OneNote but there are many variants of the productivity tool around.

I found the videos, available here, very useful but not in the way you’re probably imagining.
I watched them at anywhere between 1.2 and 1.6x their normal speed depending on who was speaking. Why? Because I used them for revision and checking I knew what was being shown, not because I was using them as my only learning opportunity or because I wanted to emulate the classroom experience. If I didn’t know something that was being shown I either stopped the video and researched that subject, or watched that part at normal speed. However, you cannot pass the MCM exams just by watching 40 hours of videos! They are readiness videos, not a complete training course.

That was "all" I did for my preparation. Notice I didn't call it revision: I was honest and had to learn new things for the MCM exams, and you will need to as well. I also didn't do any MCM training courses, practice labs, webinars or discussion groups etc. However, watching the #sqlmcm hash tag on Twitter may be of interest.

If someone asked if you're ready to take the exams you'd always say you need another month or two. In fact, the more you prepare the more you realise you don't know, so the more time you feel you need; that's just because you're aiming for a perfection that'll never exist. Below are three recommendations my mentor, an MCM, gave me to help determine if I was ready and prepared:

• if your preparation plan is going to be more than 6 months you’re not ready to even begin. You can’t sustain the preparation intensity for any longer than that
• you should be able to confidently talk about a feature of SQL Server to someone else who works with the product for 90 seconds and not end up wondering if the other person believed what you were saying, because everything you said should have been true and something you'd learnt
• you should be able to take all four of the pre-req exams needed for the MCM the day before you take your first MCM exam and not think anything of it. If passing an MCITPro exam is still a big hurdle for you, you’ll need to re-consider your approach to the MCM exams

So, that’s my advice, it’s not right, it’s not wrong, it’s just my approach that I used for taking, and passing first time, the MCM exams. Good luck with yours!

# What to do when the solution will never be in front of you

Sometimes you have a requirement which you don't have a tool in your toolbox for.  When that happens the solution may not be to just add another tool to your existing toolbox; it might be that you have to use an entirely new type of toolbox, with tools in it you've never used before.  Doing that may be a bold architectural decision, but it is often where the biggest performance or functionality gains come from.

Recently in the SQL Server space, some technologies have become available that showed how existing performance bottlenecks were removed by solving the problem in a completely new way, rather than with the previous successful but relatively temporary fixes.  Two examples we'll consider are solid-state storage and columnstore indexes.

## 10x faster can’t always come from 10x the power

Tonight I read the keynote presentation that Conor Cunningham gave at SQLBits recently about ColumnStore indexes (available here) but rather than read about how ColumnStore indexes work I read it from an architectural decision making perspective.

Midway through the presentation is the line “Making something 10x faster is HARD”, followed by a slide questioning how could you make a query run 10x faster.  The answer wasn’t optimising the existing way of doing things, it was to implement a completely new way of working, in this case a new query optimiser mode and index format.  The existing options just couldn’t be tuned anymore to meet the requirement.

## Game changing storage

The other example I quoted was the introduction of solid-state storage in database servers.  Making storage fast was always a goal of infrastructure teams; they used different RAID levels, different caches and different numbers of disks until the industry realised that 5 and 10% gains weren't always worth the effort or cost of implementing.  Instead, what we wanted was something 10,000% faster, and the answer wasn't more disks; it was a totally different approach to persistently storing data, in this case using memory chips instead of spinning platters.

## Conclusion

You're probably never going to be asked to re-write a query optimiser or create a new type of storage hardware; however, you may be asked to keep making something go faster or scale further after having just done all you can to make it faster.  It could be making reports run quicker or supporting more concurrent users.  My conclusion is that you shouldn't just think about how to solve today's problem using today's options; think about a medium-term option that may mean doing something radical, so it won't be a shock when you realise you need to do it.

# Microsoft Certifications – a newcomer’s guide

At the recent SQLBits X conference I was invited onto the panel during a session about Microsoft certifications, and I said I'd write a follow-up article about some of what we discussed.  While one of the session's aims was to start communicating the changes the SQL Server 2012 certifications are bringing, most of the questions were about Microsoft certifications in general, although there was a disproportionately high interest in the Microsoft Certified Master.  I'll cover MCM topics in a separate post.

These are just my views and the advice I give people who ask me similar questions, they may differ from yours, they may not.

## Which level of certification should you aim for?

For the SQL Server database engine, there are four levels of certification, the BI track will soon have a similar number, and at the same time all of the certification names will change but for now I’m going to use the SQL Server 2008 era titles and list them in increasing order of “difficulty”.  They are: Technical Specialist (MCTS), IT Professional (MCITPro), Certified Master (MCM) and Certified Architect (MCA).

Which level is right for you depends on your personal goals.  If you have only just started using SQL Server or want to get confidence through external validation that what you know is accurate then the Technical Specialist is a good first goal.  In order to prepare for this you’ll probably need to broaden your knowledge to include some SQL Server features you haven’t used before, preparing you for when you do use them.  This is an important step as it shows how certification will drive your professional development as well as deliver your certification goals.

If you take a Technical Specialist exam and get a good pass mark then you're probably already on the way to achieving the IT Professional certification.  I'd recommend this level for anyone who uses SQL Server all day every day and has done so for 1-2 years.  Passing this level will not only validate your knowledge, but also give you confidence to talk about SQL Server.  If you're in a role that deals primarily with SQL Server then you're going to have people asking you questions about it; you're on the road to becoming your department's subject matter expert.

When it comes to the Microsoft Certified Master there’s only one person who can tell you if that level of certification is a suitable goal for you, and that’s you.  Actually, your manager might have a view as well especially if you work for a company that’s passionate about SQL Server.  If you look at the requirements that microsoft.com gives for Microsoft’s highest level of technical certification and think “I can do that” or “I want to pass that” then you’ve just set your next goal.  It’s true that there’s probably a lot more thought that goes into aiming for this difficult and expensive certification but ultimately you have to have that “I want to pass” motivation in you otherwise the 6+ months of preparation will just be something you start tomorrow.  If you’re serious about wanting to pass the MCM then it has to become part of your everyday life.  I’ve met a few people who are “18 months into a 6 month revision plan”, they’ve just got side-tracked apparently.

The Microsoft Certified Architect is Microsoft's highest level of certification for SQL Server and one which often gets overlooked.  Why?  I put it down to it not being a technical certification.  To me, however, it's just as important as any technical credentials you might have if you're positioning yourself as one of the best in the world at deploying or architecting SQL Server solutions.  Being a Microsoft Certified Master is a pre-requisite for the MCA program, which doesn't have an exam.  Instead you have to submit a 40 page portfolio and attend a 6 hour Board in Seattle where you're quizzed on design and deployment methodologies, communication and influencing styles, and risk and resource management amongst other things.  For me personally, it's an exciting challenge.  After 2 years of tech-heavy MCTS, MCITPro and MCM preparation I need to balance my skills with the ability to walk into a blue-chip company and say "here's the right SQL Server architecture for your business and technical requirements, and I'll lead the implementation of it."  That to me is powerful.

## Strengths and Weaknesses

One of the perceptions newcomers to certification often worry about is that you need to be an expert and get every question in an exam right in order to pass.  If you look at the learning topics and begin to see unfamiliar subjects, perhaps impersonation, XML or CLR, you might think you need to be as good at those areas as you are in your stronger areas.  The good news is that, to my knowledge, no Microsoft exam requires you to get 100% to pass; in fact the published pass mark for most if not all is 70%.  I'm not suggesting you can ignore entire areas of the syllabus, but at the same time you don't need to get every question right to pass.  If you go into an exam knowing more about indexing than required but less about XML than needed, you might be lucky and pick up extra marks to balance out your overall score.  Of course, there are quite a few topics you'll need to be familiar with to pass, but knowing that it's OK to get some questions wrong should help remove the fear that you need to be an expert in every part of the exam.  No one is an expert in everything; someone will always have a strength that makes up for a weakness elsewhere.

# Just what comes in the box?

As a SQL Server architect/consultant/advisor/advocate/you name it, one of my roles is to advise clients on the best solution for their requirements and their budget.  That last piece may seem like common sense but people often forget with SQL Server where the dividing line between the Standard and Enterprise editions sits.

Knowing what comes in each edition, or more commonly what doesn’t come in Standard Edition especially in the area of high availability, is critical for me when I’m designing solutions for clients.  So, with the recent public release of SQL Server 2012 my hunt for the “official SKU breakdown” began.  There have been licensing datasheets released since late last year but they were too high level for me and the marketing terminology didn’t always translate into technical features.

Fortunately, TechNet has a feature by edition breakdown well documented on a web page here.  Some of what the page shows you could guess, such as AlwaysOn Availability Groups being Enterprise Edition but 2-node FCIs remaining in Standard, however there were lots of surprises, caveats and footnotes which caught my eye.

I’ve yet to test some of the exact scenarios the web page described, but here is a list of the licensing breakdown which caught my eye:

• Database mail, SSDT, Intellisense, and version control support are not available with the 64-bit version of Standard edition
• There are a lot of advanced SSIS features and adapters only in the Enterprise edition and not in the BI edition
• Database mirroring is still listed as being supported in Standard and BI editions, just deprecated
• Database Tuning Advisor has limited functionality in Standard Edition
• The BI and Standard editions are limited to the lesser of 4 sockets or 16 cores
• Standard edition also has a “Maximum Compute Capacity” limit, explained in a complex way here
• The database engine in BI edition is limited to 64GB of memory but SSAS and SSRS have no memory limitation

I don't intend to review each observation; instead I'll make the general one that if you're looking, in Microsoft's words, to deploy more than "non-mission critical" workloads on the Standard or BI editions, you should check the SKU breakdown first.