
2007 PASS Community Summit

The Professional Association for SQL Server (PASS) will host the industry's largest user-run educational event exclusively dedicated to SQL Server in Denver, Colorado on September 18-21, 2007. Attend the 2007 PASS Community Summit, the largest event of the year exclusively dedicated to SQL Server education. Get the largest and most comprehensive coverage of SQL Server 2005 anywhere while you develop and expand your knowledge of SQL Server 2000. Meet and network with members of the Microsoft SQL Server development team, SQL Server MVPs and other SQL Server professionals. Check out the world's premier SQL Server vendors at the largest, most comprehensive tradeshow dedicated to SQL Server. And much more!

Date: September 18-21, 2007
Location: Denver, Colorado

Welcome to our coverage of the 2007 PASS Community Summit in the mile-high city of Denver, Colorado. This year's summit is held at the Colorado Convention Center. The weather has proven to be beautiful although a bit "chilly" for attendees who live closer to the equator. Denver offers beautiful views of the mountains and architecture, which have made this trip all the more appealing.

Upon registration you are gifted with a comfortable and large shoulder-sling backpack (which makes attendees readily identifiable on the streets of downtown Denver!), a conference schedule, and the always popular instructions on where to get the SWAG! Each attendee has full access to WiFi and the summit site to download all the presentation material as well as scripts referenced in each session.

September 18

by Jean-René Roy

by John Paul Cook

Query Hints
Kalen Delaney

In this preconference session, Kalen Delaney provided an in-depth explanation of how and when to use query hints. If you ever have a chance to attend a presentation by Kalen, you should definitely take advantage of the opportunity. You are certain to enjoy both the content and her presentation prowess. She began by providing an explanation of what a query hint does. In English, a hint is a gentle suggestion, but in SQL Server it is not gentle; it is a directive. If the directive is wrong, a hint can generate an error. The query optimizer can ignore a hint, so you need to look at the query plan to confirm it is used. Usually the optimizer will take the hint, but not 100% of the time. For a while, it looked like Microsoft was backing away from hints. But people have started writing more and more complex queries, and new features in SQL Server 2005 offer more opportunities for complexity. Now Microsoft has added new hints. Kalen said she is more likely to use hints, but she provided a cautionary note. The big problem is that by using hints, you take away the advantages of the highly engineered improvements that may come with a service pack. Over time, your data can change, and your hints can become out of date. Hints have their place, but they should not be overused.

Table hints primarily have to do with concurrency and locking. The main exceptions to this general rule are INDEX and FASTFIRSTROW. The WITH keyword is sometimes required and is recommended in all cases. Originally Microsoft was going to require its use with hints. Kalen recommends using it because it may be required in the future.
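
As a hedged illustration (the table and index names are made up), a table hint written with the recommended WITH keyword looks like this:

    -- Hypothetical sketch: table hints written with the WITH keyword
    SELECT OrderId, OrderDate
    FROM dbo.Orders WITH (INDEX(IX_Orders_OrderDate), NOLOCK)
    WHERE OrderDate >= '20070101';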

Join hints require that you specify the type of join (INNER, OUTER). A join hint affects only the one join between the two specified tables. If you need a hint to apply to a multi-join query, you can put the hint in an OPTION clause so that all joins are affected. Kalen showed examples to provide clarity to a potentially confusing aspect of using a join hint. First you must run the query without a hint and find out what type of JOIN the optimizer chooses. Then, when you create your join hint, you must specify the same type of JOIN. For example, if the optimizer chose a HASH JOIN, you must specify a HASH JOIN. If you instead specify MERGE JOIN, you will get an error. When you use a join hint, you control the order of the join. The first table specified is the first table used in the join order.
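
A sketch of the two forms, with hypothetical tables: a join hint placed on one specific join, and the OPTION clause that applies to every join in the query.

    -- Hypothetical sketch: a join hint on one specific join...
    SELECT c.CustomerId, o.OrderId
    FROM dbo.Customers AS c
    INNER HASH JOIN dbo.Orders AS o
        ON o.CustomerId = c.CustomerId;

    -- ...versus an OPTION clause that influences all joins in the query
    SELECT c.CustomerId, o.OrderId, d.ProductId
    FROM dbo.Customers AS c
    JOIN dbo.Orders AS o ON o.CustomerId = c.CustomerId
    JOIN dbo.OrderDetails AS d ON d.OrderId = o.OrderId
    OPTION (HASH JOIN);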

RECOMPILE is useful in batches with local variables. When you first compile a batch with a local variable, SQL Server doesn’t know the value and may not come up with a good plan.   You can use RECOMPILE for those cases where the optimizer doesn't come up with a good plan on its own. It forces the optimizer to use current values of local variables instead of a cached plan.
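
A minimal sketch of the hint in a batch with a local variable (the table and variable names are made up):

    -- Hypothetical sketch: force recompilation so the current value of @cutoff is used
    DECLARE @cutoff DATETIME;
    SET @cutoff = '20070901';

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= @cutoff
    OPTION (RECOMPILE);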

The USE PLAN query hint is for when the optimal plan requires something not forcible through other hints.  It is useful for enforcing TABLE or JOIN hints with a plan guide.   USE PLAN is also good for operations for which there is no hint, such as forcing a seek instead of a scan.  It can also force when a sort is done, either before or after an aggregation.  Suppose you have a query that runs fast on one system and not on another. You use the fast system to create a plan guide. Then you force the slow system to use this plan.  Using a plan guide is particularly valuable for optimizing applications for which you can't change the queries.  For example, even if you have access to the code of a third-party application, it may violate the support agreement for you to change the code.  Or you may have a custom application that you can modify, but there's just too much bureaucracy to go through to make changes.  By using a plan guide, you can avoid these issues and still be able to force your own optimizations.  Additionally, it is easier to remove a plan guide than to change a query, so it makes it easier to undo your changes.
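
A minimal sketch of creating a plan guide in SQL Server 2005; the guide name, statement text and hint are all hypothetical, and the statement text must match the application's query exactly for the guide to be applied.

    -- Hypothetical sketch: attach an OPTION hint to a query you cannot modify
    EXEC sp_create_plan_guide
        @name            = N'Guide_Orders_Recent',
        @stmt            = N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE OrderDate >= ''20070101''',
        @type            = N'SQL',
        @module_or_batch = NULL,
        @params          = NULL,
        @hints           = N'OPTION (MAXDOP 1)';

    -- Dropping the guide undoes the change without touching the application
    -- EXEC sp_control_plan_guide N'DROP', N'Guide_Orders_Recent';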

In closing, Kalen advised that hints be used carefully and to always use the least invasive hint first. Lastly, things change over time, so it's necessary to monitor your changes and see if they are still appropriate.

September 19

by Jean-René Roy

by John Paul Cook and Kim Morgan

Keynote Session
Kevin Kline, PASS President

Kevin started the conference by informing the audience that Microsoft and Computer Associates (CA) founded PASS in 1999. He talked about the need to protect databases from the users and what kind of rules the RDBMS industry has come up with after 20 years of history. He said that one of the things that would be addressed at this conference is best practices and lessons learned.

Kevin talked about PASS as an organization. PASS has significantly reduced its operating expenses, which has enabled funding for chapters to be increased by 50%. Although the PASS reserves are lower than industry benchmarks for other professional organizations, they have been able to maintain their reserves. PASS 2007 has 30% more offerings compared to PASS 2006. Kevin said PASS 2006 was kind of a high water mark; being in Microsoft's backyard helped attendance. Attendance for 2007 is slightly down, about 10-15% overall. Among those who are attending, the Product Support Services team is here in force. They are the ones who take the calls about SQL Server breakdowns. They deal with all of the problems and have a great deal of knowledge. The Customer Advisory Team (CAT) is also here. They are among the brightest people and they write whitepapers. Not only that, they are wearing the brightest shirts, the bright orange shirts. CAT members are staffing the lounge along with MVPs. The lounge is a great place to visit to get answers to difficult questions, and Kevin encouraged people to take advantage of the great resource the lounge is. He went on to discuss the session for women, which has been highly rated in the past.

After requesting a show of hands, Kevin said over half of the attendees have attended a previous PASS conference. This year, the DBA track is 26% of the total content. BI has grown to 24% based on feedback from prior years. There are now 25,000 PASS members with over 100 chapters around the world. Next year's PASS conference will be in Seattle again. A new management company was hired to reduce costs and improve the level of service. New governance and financial processes have been instituted. Every year a PASSion award is given; this year's award was given to Denise McInerney for her exemplary volunteer activities. PASS has no paid staff, so getting work done by volunteers amplifies the organization. All of the volunteers were recognized for their efforts. Kevin encouraged everybody to vote in the election for board members.

Ted Kummert, Microsoft Corporate VP, Data and Storage Platforms Division

Ted started off by saying this is his first PASS since taking over the lead of SQL Server in January of last year. He believes he has the best job in Microsoft. There is plenty still to do. The next major release is SQL Server 2008. Many of the things being shown have been released to the public. The first version of SQL Server was released in 1989. SQL Server 7.0 was a watershed release that left the Sybase code behind and offered a great TCO. It was the first database product with OLAP services, the first step in BI offerings. SQL Server 2000 was considered by many to be the first enterprise release. It was followed by SQL Server 2005, a very large release. Ted said SQL Server has come a long way. There are many benchmarks of success.

Important benchmarks

  • More units than Oracle and IBM combined
  • Fastest growing BI vendor
  • Over 619K trained Pros
  • Winning on scale and value (1TB and 3TB)
  • Security focus showing long term results
  • No critical security issue since 2002

Looking ahead, there is a data explosion. It is being driven by the evolution of new data types, which can include video and other large data types. Flash memory enables mobile devices to be data rich. All of these types of data need to be supported by a common, efficient model. It needs to support all tiers as well as data in motion. The products must enable users to extract value from the data.

Microsoft has taken the "Your Data Any Place, Any Time" approach to SQL 2008. Here, the focus has been placed on the following categories:

  • enterprise data platform: from infrastructure, security, scalability and reliability
  • beyond relational: from facts and figures to sights and sounds
  • dynamic development: from timing the solution to increasing agility and productivity
  • pervasive insight: from getting questions answered to more users getting specific questions answered more directly

There are many new features added to SQL 2008 that will excite not only the technical administrator but the business visionary as well.

  • Transparent data encryption without an upstream effect on applications
  • Auditing is taken up a notch to capture all actions to provide insight for compliance by collecting all information in the db and servers through policy based control
  • Infrastructure Optimization with use of better predictors so that each set of information, in every conversation, is tuned to get the best performance
  • Increased reliability
  • Predictability is enhanced with plan freezing in the engine followed by lock-down
  • A Best Practices Analyzer is included to get better visibility into the environment
  • Enhanced Integrated Full-Text searching by bringing the indices into the engine with a common application model
  • The resource governor helps organizations deliver predictable and reliable performance

Demo by Shishir Mehrota

The Declarative Management Framework (DMF) enables you to do two main things – manage proactively and at scale. If you log on as administrator, a table is created in the dbo schema by default. With DMF, it is very easy to create a policy to prevent a table from being created in the dbo schema. Management at scale allows a policy to be run against an entire server group with a single operation. The resource governor prevents runaway queries from taking too many resources. It uses resource pools to set maximum allowable resource levels for users. As an interesting diversion in his demo, Shishir showed the new Intellisense feature in SSMS.
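
While the demo was shown through the tools, the resource governor pieces can be sketched roughly in T-SQL like this (the pool, group and login names are made up):

    -- Hypothetical sketch of the resource governor objects described above
    USE master;
    GO
    CREATE RESOURCE POOL ReportingPool WITH (MAX_CPU_PERCENT = 30);
    CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
    GO
    CREATE FUNCTION dbo.rg_classifier()
    RETURNS SYSNAME
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @grp SYSNAME;
        SET @grp = N'default';
        IF SUSER_SNAME() = N'reporting_login'   -- route one login to the limited pool
            SET @grp = N'ReportingGroup';
        RETURN @grp;
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;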

Demo by Jason Buffington

The Data Protection Manager is for more than just SQL Server. People have tape backup systems today and they want something better.  All of the different types of data (Exchange, SQL Server, etc.) are managed as common objects. Backup can be to disk, tape, or both.  DPM offers continuous data protection. DPM simplifies the backup process from a compliance perspective.

Demo by Jason Buffington

You want good performance from a common application model. Any data that includes an address should be able to take advantage of spatial data.  The heart of “spatializing” (Shishir's trying to promote that word!) an application is geocoding addresses into a geographic point, which is a latitude and longitude pair.  There is a new distance operator for calculating distances between objects. Also, there is a new type of index to support rapid geospatial queries.

Demo by Erick Thomson

The ADO.NET entity framework allows you to work with data at a conceptual level.  LINQ works with the entity framework. It is of particular benefit to developers because it abstracts away the database in a very powerful and functional manner.

Demo by Erick Thomson

Insight into your source data helps you build better systems. The new Data Profiling Task in SSIS identifies patterns in your data.  It generates statistics which can be viewed by the standalone Data Profile Viewer. Among other things, candidate keys and data distributions are shown.  SSRS has a new scale break option to automagically insert a scale break into a graph or chart to prevent a few extreme values on a chart from overshadowing the scale of the majority of the data.

Demo by Erick Thomson

Pricing for SQL Server will remain the same. Microsoft will continue having SQL Server as a comprehensive product offering.

by John Paul Cook

Beyond Relational
Kevin Farlee and Michael Rys, Microsoft

Kevin Farlee started the session by referring to the data explosion mentioned in the keynote address. He said that a large percentage of the data is not tightly structured relational data. Digital archives of scanned government documents and audio recordings of speeches are good examples. When blobs like this are put into a database, they are blobs without structure. One goal is to reduce cost by allowing you to have the same application manage all types of data cleanly. The XML data type and full-text indexing of blobs help out. SQL Server 2008 upgrades the XML data type. The FILESTREAM feature helps with the storage and management of large unstructured data such as video and MRI images. It is very difficult to synchronize the file system and SQL Server today. Storing large blobs in SQL Server is expensive because SQL Server storage is more expensive than other types of storage, and SQL Server wasn't originally tuned to store large blobs. The FILESTREAM storage attribute allows a blob to be stored on the file system itself but managed and controlled by SQL Server. Client code doesn't see the VARBINARY(MAX) data stored with FILESTREAM as any different from an API perspective. However, the 2 GB limit is removed when FILESTREAM is used. FILESTREAM is a modifier to the FILEGROUP keyword. During his demo, Kevin forgot he was connected to master. The very attentive audience caught the error and provided help to keep the demo running smoothly. Kevin set expectations by explaining that remote storage of FILESTREAM data is not supported in the initial release. Nor are database snapshots and mirroring supported in this release, and SQL encryption and table-valued parameters are not supported either. Another topic Kevin talked about is the API for integration with third-party storage systems. The Remote BLOB Store (RBS) API can't guarantee that a blob wasn't changed on a storage system (such as what Fujitsu or EMC Centera offer), but it can ensure that the link is valid, that the blob exists. By contrast, FILESTREAM provides both link-level consistency and data-level consistency.
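
A minimal sketch of what the FILESTREAM syntax looks like in the SQL Server 2008 CTP, assuming a made-up database and table (a ROWGUIDCOL unique column is required on FILESTREAM tables):

    -- Hypothetical sketch: a FILESTREAM filegroup and a table that uses it
    CREATE DATABASE MediaDB
    ON PRIMARY
        (NAME = MediaData,  FILENAME = 'C:\Data\MediaData.mdf'),
    FILEGROUP MediaFSGroup CONTAINS FILESTREAM
        (NAME = MediaBlobs, FILENAME = 'C:\Data\MediaBlobs')
    LOG ON
        (NAME = MediaLog,   FILENAME = 'C:\Data\MediaLog.ldf');
    GO
    USE MediaDB;
    GO
    CREATE TABLE dbo.ScannedDocuments
    (
        DocId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
        DocName NVARCHAR(260) NOT NULL,
        DocBlob VARBINARY(MAX) FILESTREAM NULL   -- bytes live in the file system, managed by SQL Server
    );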

Michael Rys started his presentation by mentioning that SQL Server 2005 uses the CLR to provide the same capability of looking into blobs. In SQL Server 2008, full-text handling is improved by bringing blobs fully into the database. A big goal is to bring full-text completely into the database to improve scalability and manageability. Performance is improved in many cases because the optimizer has knowledge of the improved full-text indexes. (At this point in the presentation, the lights went out because someone in the overcrowded room leaned against the light switch.) There are new word breakers in SQL Server 2008. The number of locales supported is 40, up from 23 in the previous version.

Michael mentioned the new spatial data types (GEOMETRY, GEOGRAPHY) but did not go into detail because there is a separate session on this topic.  He then talked about XML improvements in SQL Server 2008.  Since the native format for Office 2007 documents is XML, SQL Server 2008 fully supports the indexing of Office 2007 documents.  There is also new support for SharePoint documents.  Property bags can be used for storing annotation of files and documents. 

The SPARSE keyword can be added to a column definition to enable more efficient storage of sparse data.  What was particularly interesting in Michael's demo is how sparse data can be indexed.  The index is smaller and thus faster because it only indexes the column values that actually contain data. 
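
The coverage doesn't name the feature behind the smaller index, but it matches SQL Server 2008's filtered indexes; assuming that is what was demonstrated, a sketch with hypothetical table and column names might look like this:

    -- Hypothetical sketch: sparse columns plus a filtered index over only the populated rows
    CREATE TABLE dbo.ProductAttributes
    (
        ProductId INT NOT NULL PRIMARY KEY,
        Color     NVARCHAR(30) SPARSE NULL,   -- NULL values consume no storage
        Voltage   DECIMAL(5,2) SPARSE NULL
    );

    CREATE NONCLUSTERED INDEX IX_ProductAttributes_Voltage
        ON dbo.ProductAttributes (Voltage)
        WHERE Voltage IS NOT NULL;            -- index only the rows that actually contain data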

Michael also explained how SQL Server 2008 has improved support for hierarchical data. This could either be the simple, classic case of an employee table that includes a self-reference to an employee's supervisor or it could be a complex bill of materials. The HierarchyID is a CLR UDT with varbinary encoding, offering methods such as GetDescendant and GetAncestor. It does not have referential integrity to prevent breaking the hierarchy after a delete.
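
A rough sketch of the methods mentioned above, using a hypothetical employee table:

    -- Hypothetical sketch: an org hierarchy stored with HIERARCHYID
    CREATE TABLE dbo.Employees
    (
        OrgNode HIERARCHYID   NOT NULL PRIMARY KEY,
        EmpName NVARCHAR(100) NOT NULL
    );

    INSERT INTO dbo.Employees (OrgNode, EmpName)
    VALUES (hierarchyid::GetRoot(), N'CEO'),
           (hierarchyid::GetRoot().GetDescendant(NULL, NULL), N'VP of Sales');

    -- GetAncestor(1) walks one level up, pairing each employee with a manager
    SELECT e.EmpName, m.EmpName AS Manager
    FROM dbo.Employees AS e
    LEFT JOIN dbo.Employees AS m
        ON e.OrgNode.GetAncestor(1) = m.OrgNode;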

What’s between Index Internals, Isolation Levels, Locking and the Consistency of Reads
Itzik Ben-Gan

As always, Itzik presented in a very large room that was well-attended and he started on time.  The session began with the fundamentals of data internals.  If you think there's nothing more to know about 8k pages and extents, you haven't heard Itzik speak.  When it comes to indexing in SQL Server, he is the master of detail.  The first key point he made about indexing is that in heap storage, page splits generally can only occur in the index, not in the heap.  Because no logical key ordering is enforced, heap storage data doesn’t page split. Inserts never cause splits in heap storage.  But an update can expand a column beyond the space available in the page and move the row data to another page leaving behind a row forwarding pointer. 

In a clustered index, data is organized in a B-tree. Data is ordered in index key order.  There is no guarantee of physical ordering. Logical key ordering does occur, but physical ordering of the data is a myth.  The leaf levels support sorting. The non-leaf levels support searching (seeking). 

Next, Itzik talked more about page splits. Say we have a new row with a key value. If the intended page has enough space, the row is put in that page. But if the page doesn't have enough room, SQL Server will allocate another page somewhere. This results in updates to the linked list. Logical order is preserved, but physical order is not.

There are two ways to scan the data from an index. The linked list can be followed (logical fragmentation negatively impacts performance) or an allocation order scan can be done (which doesn't care about logical fragmentation).

Itzik then discussed the impact of a page split at the leaf level of a clustered index. Every time a row moves, all of the nonclustered indexes must be updated.

In a data warehouse scenario where the data is actually read only during the day when it is queried, you should change the flag to read only.  This will improve performance because the storage engine doesn’t have to be concerned with the data changing during the read process.
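
Assuming the flag in question is the database-level read-only option, the switch is a one-liner (the database name is hypothetical):

    -- Hypothetical sketch: mark a warehouse database read only for the daytime query window
    ALTER DATABASE SalesDW SET READ_ONLY;

    -- Switch it back before the nightly load
    -- ALTER DATABASE SalesDW SET READ_WRITE;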

READ COMMITTED SNAPSHOT gets a read consistent view of the data. It may be faster than NOLOCK without the risk of inconsistent results.  If the workload is primarily INSERTs and SELECTs, then the overhead is low because neither type of statement is row versioned.
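
For reference, the database option is switched on like this (the database name is hypothetical; the statement needs exclusive access to the database to complete):

    -- Hypothetical sketch: enable row versioning for read-committed reads
    ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;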

References:
insidetsql.com

by Kim Morgan

Using SMO to Manage SQL Server DBA-306-A
Allen White, Advanstar Communications

Allen White has over 30 years of experience in Information Technology and has been working with SQL Server since 1995. He claims he got into SMO due to laziness, but one could argue that it was sheer genius to move to a method of automated scripting to do the daily, weekly and monthly tasks needed to keep the server up to date, documented, audited and in synch with the enterprise environment standards.

This session was a follow-on from last year's discussion, expanded to include PowerShell demonstrations in response to requests from the previous attendees.

SQL Management Objects (SMO) is Microsoft's answer to the programmatic need to manage all aspects of SQL Server through collections of objects. In other words, SMO allows you to build applications which will follow your guidelines for managing your servers. SMO is built on the .NET Framework, so the hierarchical objects it exposes are available to you in your language of choice.

SMO provides support for features available in SQL 2005 and 2008 such as table and index partitioning, HTTP/SOAP requests, XML Schema namespaces, XML indexes and the XML datatype, Full Text Search enhancements and Page Verify. It runs as managed code, so you will need to ensure you have all the references and then import your namespaces. SMO is compatible with SQL 2000 for the features that are available, but it will error out on features that do not exist, such as Database Mirroring.

A clear differentiator from SQL-DMO is that while DMO's objects are deeply embedded, SMO brings the objects out to behave as stand-alone objects.

Allen also touched on the scripting language PowerShell. PowerShell is a scripting environment for Windows and is similar to Unix shell scripting. Where it differs is that Unix pipes text from one program to another, whereas PowerShell pipes objects from one to the next, so no parsing is needed on the other end.

PowerShell uses cmdlets, or little command line utilities, that a user can script to fix things that had been broken in cmd.exe.

PowerShell uses a verb-noun naming convention and is powerful yet available to those who are not comfortable with, or do not have access to, Visual Studio. Get your free download from Microsoft, but note that version 1.0 does not allow scripts to be run inter-server; version 2.0 will meet this need.

TIP Use Visual Studio Object Browser while you learn PowerShell. You will have cleaner visibility into your code.

A great example Allen shared with the attendees focused on general miscellaneous management tasks.
  • run TSQL scripts against "most" databases : for example, when a global change is required, this is a way to manage the process and significantly reduce the chance of error based on the criteria you determine
  • table/column maintenance : accessing tables and columns through the object hierarchy. In this example, imagine the application has large varchar columns; changing them in each table manually would be tedious and would add the risk of user error, but with PowerShell you can script the change via SMO.
  • agent job creation : can handle most maintenance plans if built properly. In this example, you have heavy performance demands against your server and you have limited disk space. You can program performance condition alerts to watch for problems (for example, the transaction log exceeds 50% of its space) and fire alerts that trigger the agent job to back up the transaction logs.

Quick Notes about SMO
  • execute SMO via SQL Agent by calling the .exe compiled from Visual Studio
  • You can install the SQL 2005 Client tools on your SQL 2000 box to get the functionality if you want to use SMO over DMO
  • You can use MOM alerts to fire off SMO

Quick Notes about PowerShell
  • variables in PS use a $, e.g. $s = server
  • an information variable exists; $s.information returns all information on the server properties
  • $p pulls up all the properties so you can browse to them, then go back to Management Studio to investigate further if you don't have the details you need
  • $_ is the current object you are iterating through. A script path is prefixed with ./ because PowerShell doesn't assume the current directory

References:
Scripting Database Objects using SMO (Updated)
PowerSMO at Work Part 1: DBA Scripts and Functions
Managing SQL Server using PowerSMO
Automate your Database Integrity Checks Using SMO
Allen White's Blog
SQLJunkies

Disk Configuration for the SQL Server DBA
Allan Hirt, Avanade

Allan has over 15 years of implementation and configuration experience. Allan gave a deep and intense session covering myths and truths around disk configuration and performance, and around designing and monitoring solutions.

Myth & Truth 1 Drivers, BIOS and Firmware
  • MYTH: these are the root of most hardware-based, disk-related issues with SQL Server.
  • TRUTH: never update servers, HBAs and storage devices to the latest and greatest "just because"
  • TIP: pay attention and verify STORPORT hotfixes/updates (KB932755) <- latest as of today
  • TIP: watch your systems like a "hawk"

Myth & Truth 2 Disk cache
  • MYTH: it will solve all your performance problems -- NOT!!!
  • TRUTH: watch out for shared cache
  • TRUTH: most disk subsystems are shared with other applications and I/O types. Be aware of what else is going on ... having enough disk space doesn't equate to having enough I/O. For example, ask the questions: what else was going on at the time? Was someone sending a large file?
  • TIP: optimizing is tough in this environment, so optimizing for one may hurt another. Robbing Peter to pay Paul won't always work.

Myth & Truth 3 RAID
  • MYTH: Does it matter? (Allan thinks the old tenets of RAID 1+0 vs. RAID 5 vs. RAID 1 should be revised.)
  • TRUTH: optimizing for space versus performance (how much space do you need versus the bigger picture of how the choice will affect performance)
  • TIP: plan for a 2x write hit for RAID 1+0 and a 4x write hit for RAID 5. This is why RAID 5 write performance is so much lower than RAID 1+0. Unfortunately, you may have no control over the RAID.

Myth & Truth 4 Bottlenecks

MYTH and TRUTH: every system will have one

TIP: These bottlenecks could be memory, processor, HBA etc. So it is important for you to know your hardware design and where you fall short so you can then compensate for those weaknesses in some way. You can plan for more disk next year for example. Be sure to understand the trade-off that was made in order to make the solution work.

Allan then focused on designing the configuration. He pointed out a few key points as you begin the design process.

At the end of the day, the thing that matters most is performance. If you don't have enough I/O on the backend performance is going to tank. Database size may not matter in every instance because a small 1GB database may consume more I/Os than a 1TB database. You have to move away from the mentality of "here's your storage, have a nice day".

Another key aspect is to understand how to measure your I/O requirements. Each disk spindle has an I/O rating; typically you can expect 100-150+ I/Os per spindle. So you must know what your application requires and will consume. Test, test, test! Test under a realistic workload. This means that running a single transaction or one heavy transaction won't give you the real results you need. However, you will be able to isolate a transaction and know how much it costs, so you can do the math later and determine your hardware requirements. Use the testing cycle to tweak your test systems since, more often than not, hardware is purchased before the application has made it to the development phase. It is more than "we're inserting this many rows and need this much space." Remember to first worry about the I/O and performance.

TIP: Check out the following -- SQLioSim - KB231619, SQLio and ioMeter

Allan took the session deeper into this fascinating topic and shared his views on dedicated versus shared disks. Generally speaking, dedicated disks and storage arrays are better 9 out of 10 times when you consider that you get the whole I/O bandwidth rather than a slice of a shared disk. You'll need a greater number of overall disks in a shared subsystem to get the same performance as fewer dedicated disks. Also, be sure to consider availability; for example, one whack can knock out multiple LUNs and multiple applications.

File Placement and Performance is a hot topic. What is the standard "best practice"? Generally speaking, it's to separate out the data and logs. Allan asserts this is great in theory but hard to accomplish in reality. More often than not, you will have multiple databases, logs and backups on the same logical disk, so you aren't really getting separation. The reality is that your customer will always ask for the maximum space and the maximum drive letters. Remember that not separating isn't necessarily a bad thing if it doesn't make sense to do so, taking into account I/O, sizing and up-front configuration, security and availability. Only separate out when it makes sense and when a file would otherwise become a hotspot.

Disk space is the #1 issue. Poor planning leads to downtime. Determine what you *really* need for retention, current availability, near-line and growth. Poor performance can sometimes occur although not straight away. A large misunderstanding happens around the actual available disk space. Remember that your raw disk space is not the total usable disk space. A rule of thumb is to assume half of the raw disk is usable disk after you've finished your carving and formatting.

Additionally, most application developers and DBAs don't know how to size applications and databases properly. It's not easy. Keep the thought "do you really want to shove it on the existing storage array because you can?". Taking the path of least resistance isn't always the best case.

TIP: track your growth daily, weekly or monthly to get visibility into your growth so you can proactively plan.

Disk space also involves backups and backup retention. This is easily overlooked and, depending on any SLAs or other business/regulatory requirements, can consume a lot of space. Have you considered the requirements if you store it offsite? How much space do you need to hold all those tapes? Have you identified a place to test the restore? Don't forget about your transaction log backups! This will help you manage your growth.

Data & Log File Sizing is key. The initial sizing is crucial. Allan suggests that autogrowth should not be used to "save" you. Get more involved with your database server! Unfortunately autogrowth has become a crutch and encourages a level of laziness in the DBA. A growth event can kill your I/O if it happens at the same time another heavy I/O task is occurring, such as a bulk insert.

Other factors to consider:
  • hardware can mask or provide a band-aid for poorly written applications and queries
  • development does matter and will dictate which features can or cannot be used, backup strategy, etc.
  • Design 101! Developers shouldn't assume it's a DBA or sysadmin's task to ensure performance by way of a hardware solution. Do your normal tuning, maintenance and troubleshooting, and don't forget about tempdb and transaction log space and how they affect your I/O usage.
  • anti-virus can be a killer to your SQL Server. It can cause corruption and unnecessary I/O. Remember to always exclude your data, log, cube and backup files from scanning on every instance.

The bottom line? Don't assume. Do the work and remember that sometimes best practices should be taken with a grain of salt, because there is no /diskgofaster or /diskgetbiggerwithnodowntime or even /iwantnoioproblems switch available. Yet.

September 20

by Jean-René Roy

by John Paul Cook and Kim Morgan

Keynote Session
Joe Webb, PASS Vice-President

PASS is known for its educational value, but it is also about networking. The PSS and CAT members are here as well as MVPs, so there are people here you can learn a lot from by talking to them.

As a testament to how popular the SQL PASS conference is, Joe had to admonish the attendees to respect the Denver fire codes by not overcrowding rooms and by cooperating with room monitors when a room is at capacity.

The Women in Technology luncheon will discuss a variety of issues concerning women. Both women and men are welcome.

This year all evaluations are done online and everybody is encouraged to provide feedback to help make the next Community Summit even better. Also, next year’s Community Summit is Nov 19-21 in Seattle. Signing up now will get you the lowest possible rate.

The Microsoft Hands On Labs are both challenging and straightforward and well worth visiting. Microsoft also has a Focus Group meeting room that you are encouraged to visit and provide your thoughts to Microsoft.

Tom Casey, Microsoft General Manager of SQL Server

Tom said he'd been working as part of the SQL Server team for 11 years now. Back in the 4.2 days, there was very little community and very little in the way of best practices. SQL Server is continuing the shift from being just a database. The data platform vision and transformation is about providing the services and pervasive insight you need.

There are challenges with BI today, both in what can be done now and in the future. Microsoft's vision for BI has been fundamentally unchanged for a few years. We've focused on providing the right data at the right time. We think about all users in the environment. Sometimes Microsoft gets feedback that too much attention is paid to end users, but Microsoft does care about all users in the organization. You'll see us responding to reporting needs in the organization.

You need an infrastructure that you can trust and control to allow you to keep up with user demand with the TCO you need.

Improvements have been made for integrating SQL Server 2005 Service Pack 2 with Microsoft Office SharePoint Server (MOSS).

Demo by Lukasz Pawlowski

SSRS has added integration with WSS 3.0. All reports are published to the Reports Library. Reports open in the Report Viewer web part. This web part can be embedded in either a web page or a dashboard. The reports can be managed in SharePoint without going anywhere else. End users can use Report Builder to do ad-hoc reports from SharePoint. Rights are assigned to the reports through SharePoint. The SharePoint administrator has the ability to administer the reports without involving the DBA.

Tom continued by talking about performance management.  We’ve added to the portfolio and established the formal processes to do corporate performance management.  It is built on the solid foundation of SQL Server. BI is in everyone’s future. 

Many people don’t know how well Microsoft BI components work in a heterogeneous environment. These BI components work with products from other vendors.

There is a consistent implementation through the Microsoft BI stack. You don’t have to learn a lot of new things, you reapply the skills you already have.

BI has become one of those things business units get, and it is hard to keep up with user demand. They don't want specialty products; they want products that work across the entire line of business applications.

Getting projects started quickly is a challenge, and they want coherent reach throughout the organization. People sometimes bring up the concept of a data warehouse appliance that includes hardware and software. This only delivers for a small niche of the workload and it isn't scalable or extensible. But the message is that people want to shorten the time to value. Microsoft has started working with its hardware partners. Judy Chavis of Dell was introduced to announce a new Microsoft and Dell BI partnership.

Judy Chavis, Director of Enterprise Marketing and Business Development

Judy said that proprietary solutions require retraining and are typically very expensive. Dell is Microsoft's largest partner and has generated over 11% of Microsoft's annual sales since 2004. Dell uses SQL Server 2005 in key areas, and Dell is a $60 billion business, larger than Google and Amazon combined. Dell and Microsoft are announcing data warehousing reference configurations based on infrastructure you already know and use. These jointly engineered configurations can scale to 1-4 TB at much less than the cost of proprietary solutions.

Tom said Dell is providing jointly tested reference configurations. It is a way to get people started fast.

Pervasive Insight is a theme. It’s important to think about features you use and the value you get that you can’t see.  Microsoft has changed the way it delivers SQL Server to get releases out faster. The commitment to low TCO is very important.

The biggest investment in BI Microsoft made is in the end-to-end solution. It is easier to build the enterprise data warehouse because of data profiling, scalable lookup, change data capture, and MERGE. It performs better because of optimized partitioning, STARJOIN, and grouping sets. Administration is more efficient because of the resource governor, policy-based admin, and data and backup compression. Scaling to the user is accomplished through analysis optimizations and cube design guides.

For the developer, it is easier to build a solution and get it working.
  • Streamlined wizards with Best Practice Design Alerts
  • Better connectivity, C# scripting in transformations
  • Updated relational engine semantics including MERGE
  • Improved report designer with richer visualization, more flexible layout
  • Same great integrated development experience with Visual Studio

For the administrator, it is easier to manage and control
  • Data Profiling and advanced diagnostics Integration Services
  • Scalable backup, read-only marts and DMVs for Analysis Services
  • Simplified Reporting Services host process
  • Compression, resource governor, and Change Data Capture
  • Policy based, multi-server admin
  • New Performance Data Warehouse

For everyone
  • Data warehouse optimizations with STAR JOIN and grouping sets
  • Partition Table Parallelism and Partition Aligned Indexed Views
  • Processing and rendering scale for Reporting Services
  • Write-back and subspace optimizations in Analysis Services
  • Continued great integration with Microsoft Office and new visualizations

Microsoft has improved Reporting Services rendering to improve performance.  Analysis Services has moved from being an adjunct activity to a core business activity.  Internal optimizations to improve performance have been made.  Some of these optimizations come merely by upgrading to SQL Server 2008, others require changes to existing applications.

demo by Richard Tkachuk, Senior Program Manager

Richard demonstrated the same Analysis Services query on both SQL Server 2005 and SQL Server 2008.  Subspace computation is what Richard calls the improved computation algorithm. This reduces the query time from 50 seconds to 2 seconds.  Richard said he didn’t want to oversell, that results do depend on the exact nature of the query.

Tom said your mileage may vary on performance, but you should expect performance to improve as you upgrade from one version of SQL Server to another.

Joe Webb closed by once again encouraging attendees to talk to PSS and CAT members in the lounge.

by John Paul Cook

Building Spatial Applications with Microsoft SQL Server 2008
Michael Rys and Isaac Kunen, Microsoft

Isaac started with a demo of a drilldown of restaurants in the Seattle area using Census Bureau TIGER data of zip codes, census blocks, and business locations. Virtual Earth was used to display the results graphically. He then provided background on the theory of geodetic data. There are two basic views of the earth – the flat earth and the round earth.

Maps represent the earth as a flat object. This can lead to inaccuracies in representations of geographic data. Flat representations of the earth are prevalent as legacy data. For example, the United States government created the State Plane Coordinate System in the 1930s. Many governmental records use this coordinate system. Microsoft provides support for this very important data through its GEOMETRY type, which supports planar data.

What is not immediately obvious to people in this age of GPS provided latitude and longitude values is that not all latitude and longitude values are created equal. Each of these lat-lon pairs is actually a value relative to a particular origin. There are over 400 well-known systems of how lat-lon values can be defined. It is necessary to understand the various lat-lon systems in order to properly process lat-lon values. SQL Server 2008 understands the over 400 systems defined by the Oil and Gas Producers (OGP) though the GEOGRAPHY type.

Whenever lat-lon values are being used, the GEOGRAPHY data type is the appropriate choice to make. All other applications would use the GEOMETRY data type. These data types are implemented as CLR UDTs and are referred to by Microsoft as system CLR types. As system CLR types, no assembly registration is required. In SQL Server 2008, UDTs are no longer restricted to the 8000-byte limit and are implemented as VARBINARY(MAX). This greatly increased size limit is necessary for the large sizes of some geodetic data sets. One person asked if the GEOGRAPHY data type could be used for other planets. At this time, it cannot. The Mars rover team will have to wait for a later version of SQL Server.

The GEOMETRY data type is useful for coordinate systems other than lat-lon data. An example is interiors of buildings.

To support complex geospatial queries, a rich set of methods is available.
  • Relationship tests – find out if two geospatial data sets are disjointed or intersect each other.
  • Construction methods – Create a new geospatial object that is the union or intersection of two other geospatial objects.
  • Metric functions – distances between two points, area of a geospatial object.
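
As a hedged illustration of the metric functions, a distance calculation on the GEOGRAPHY type might look like this (the coordinates are rough values for Denver and Seattle; SRID 4326 is WGS 84, for which STDistance returns meters):

    -- Hypothetical sketch: distance between two points with the GEOGRAPHY type
    DECLARE @denver  GEOGRAPHY;
    DECLARE @seattle GEOGRAPHY;
    SET @denver  = GEOGRAPHY::STGeomFromText('POINT(-104.99 39.74)', 4326);  -- longitude latitude
    SET @seattle = GEOGRAPHY::STGeomFromText('POINT(-122.33 47.61)', 4326);

    SELECT @denver.STDistance(@seattle) AS DistanceInMeters;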

Michael explained that indexing of geospatial data is particularly important because of the large sizes of geospatial data and computations are quite expensive. SQL Server 2008 offers indexing capabilities over both geometry and geography data for certain common operations such as intersections.

SQL Server implements indexes as B-trees which are well-suited for linear data, but not two-dimensional data. It is necessary to map the two-dimensional data to the linear nature of a B-tree. It is implemented in a similar manner as an XML index.

Michael did a demo of a geospatial query that wasn’t indexed. It took about 44 seconds. Using an index, the query dropped to less than a second. Michael recommends that index hinting be used to maximize performance. To create an index for a geometry typed column, a bounding box must be specified in order to bound the gridding. You do not specify a bounding box for geography because the entire globe is always indexed.
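
A sketch of what such an index declaration looks like, assuming a hypothetical dbo.Stores table with a GEOMETRY column and an arbitrary bounding box around Denver:

    -- Hypothetical sketch: a spatial index on a GEOMETRY column; the bounding box is required
    CREATE SPATIAL INDEX SIdx_Stores_Location
        ON dbo.Stores (Location)
        USING GEOMETRY_GRID
        WITH (BOUNDING_BOX = (-105.3, 39.5, -104.6, 39.9));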

Support for geospatial data is included at no additional cost in all editions of SQL Server 2008 except for the mobile edition. SQL Server 2008 Express can take full advantage of these new features. The spatial library is usable on the client and an msi file is provided for a client-side installation. The fall CTP is the first version of SQL Server 2008 to include the geospatial features.

Practical Performance Tuning
Andrew Kelly

Andy has over 20 years of experience with relational databases and is a SQL Server MVP. This session is not for troubleshooting but for supporting day-to-day tasks that DBAs should be doing.

It is easy to get overwhelmed with the quantity of data being monitored. Turning on all of the counters isn’t really practical. Too little data can lead to erroneous assumptions. Initially it is best to only get enough to narrow down the issues. If more data is needed for a particular issue, it can be added.

About one-fourth of the audience indicated they use third-party monitoring tools. There are many tools to choose from. Whether or not they are valuable depends on several factors including the knowledge and skill of the person doing the monitoring. SQL Server 2008 has additional monitoring tools available. One problem with any monitoring tools and particularly third-party tools is that they introduce an overhead. Third-party vendors tend to over-collect data.

Quest’s Spotlight is a good tool. Idera and Symantec also have good tools. But a tool is good only if you know what to do with it.

Perfmon is a Windows utility useful for SQL Server monitoring. It is useful for monitoring both OS and SQL Server counters at the same time. It is lightweight when used properly and it is simple to set up. It also integrates with the SQL Server 2005 Profiler, which is useful for aligning SQL Server and OS events.

What should be monitored depends on the unique needs of the organization. SQL Server has over 700 counters. The sys.dm_os_performance_counters Dynamic Management View (DMV) is useful for monitoring.

Andy displayed a list of useful standard counters. It’s important to create a baseline using data from standard counters. By comparing results to a known baseline, the DBA can be proactive instead of reactive. Without a baseline, it is difficult to know if a counter value is good or bad. One thing that many people overlook is the monitoring of tempdb. Some of the counters don’t mean as much as they used to. For example, if a SAN is used, you should use data obtained from the SAN vendor’s monitoring tools instead of traditional Microsoft disk performance counters.
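
A minimal sketch of pulling a few commonly baselined counters from the DMV mentioned above (the counter list here is just an example, not Andy's):

    -- Hypothetical sketch: sample a handful of counters for a baseline
    SELECT pc.object_name, pc.counter_name, pc.cntr_value
    FROM sys.dm_os_performance_counters AS pc
    WHERE pc.counter_name IN (N'Page life expectancy',
                              N'Batch Requests/sec',
                              N'Buffer cache hit ratio');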

Perfmon is a real-time monitoring tool. Logman and Relog are tools to pull Perfmon data into a table.

Profiler is extremely powerful and useful, particularly for performance related issues. It introduces a performance hit when it is running, which makes it better for troubleshooting instead of monitoring. Trace is the actual component of the SQL Server engine that does the work for Profiler. When done properly, Trace can create much less of an impact than Profiler. It is best to configure Trace to output to a file on locally attached storage. Tracing over the network to a remote file introduces additional overhead.

Trace can be controlled via T-SQL. Once you figure out the T-SQL that is appropriate for your needs, encapsulate those statements in a stored procedure.
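
A minimal sketch of a server-side trace defined purely in T-SQL; the file path, events and columns are illustrative only:

    -- Hypothetical sketch: a minimal server-side trace driven from T-SQL
    DECLARE @TraceId INT;
    DECLARE @MaxFileSize BIGINT;
    DECLARE @On BIT;
    SET @MaxFileSize = 100;   -- MB
    SET @On = 1;

    -- Write to a local file; the .trc extension is added automatically
    EXEC sp_trace_create @TraceId OUTPUT, 0, N'C:\Traces\perf_trace', @MaxFileSize;

    -- Event 12 = SQL:BatchCompleted; columns 1 = TextData, 13 = Duration
    EXEC sp_trace_setevent @TraceId, 12, 1, @On;
    EXEC sp_trace_setevent @TraceId, 12, 13, @On;

    EXEC sp_trace_setstatus @TraceId, 1;   -- 1 = start, 0 = stop, 2 = close and delete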

Individual results are not as useful as aggregated results. You can review averages, minimums, and maximum values.

Tom Davidson, a CAT member, wrote SQL_Signature to parse trace TextData. Bill Graziano wrote the ClearTrace utility to parse traces. Microsoft's ReadTrace80 also parses trace files. Itzik Ben-Gan wrote fn_trace_gettable. You may want to use one or more of these tools.

Wait Stats provide statistics on whenever SQL Server has to wait. There are 192 different wait types in SQL Server 2005. These numbers are cumulative from the time SQL Server is started, although they can be reset by using DBCC SQLPERF. When SQL Server is waiting, it’s not doing anything, which is why you want to monitor wait states. Wait Stats don’t have to be collected too often, hourly may be just fine. They are easy to capture and report on. You really need to get a baseline so you have a context for evaluating your Wait Stats.
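
A sketch of the kind of query involved, using the standard wait-stats DMV (the TOP 10 cutoff is arbitrary):

    -- Hypothetical sketch: top waits accumulated since the last restart (or reset)
    SELECT TOP (10)
           wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;

    -- Clear the cumulative counters to start a fresh baseline window
    -- DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);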

File Stats keep track of every time SQL Server must read or write to the physical disk. The results are broken down by file. Every database file in every database is tracked.
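
A sketch of reading those per-file statistics from the corresponding DMV in SQL Server 2005:

    -- Hypothetical sketch: physical reads, writes and stalls broken down by database file
    SELECT DB_NAME(fs.database_id) AS DatabaseName,
           fs.file_id,
           fs.num_of_reads,
           fs.num_of_writes,
           fs.io_stall_read_ms,
           fs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
    ORDER BY (fs.io_stall_read_ms + fs.io_stall_write_ms) DESC;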

Keeping in mind that File Stats are for physical I/O, remember that logical I/O can be many times that of physical I/O. Don’t forget to pay attention to TempDB. Backups can skew the results if you don’t account for them.

by Kim Morgan

SQL Server Consolidation: Lessons Learned
Joe Young, Chief Architect with Scalability Experts

Joe has been in the database management realm for about 15 years, having started his career in the Oracle space and then moving to Microsoft and other technologies. Joe offered a demo-free session chock-full of valuable information to help you determine if it makes sense to go after a consolidation strategy within your enterprise.

The first question should always be "WHY consolidate?". You have short-term benefits such as the immediate cost savings for your licenses, infrastructure and maintenance. Another short-term benefit is that you can address any regulatory compliance requirements as well as halt, if not reverse, uncontrolled growth. You can immediately improve your hardware utilization levels. And remember that it is possible to spend a lot of money and still be happy with a consolidation, especially if it is dictated by a regulatory compliance need. Think "risk".

The long-term benefits include operational cost savings as well as providing the ability to expand and extend your enterprise environment.

When you are determining the full answer to the "WHY" be sure to consider items like the total number of servers and how many of those are in production. If your servers are more than 3 years old they are probably a good candidate for consolidation. Do you have management difficulties with your servers? And lastly, what sensitive or important data is residing outside of your domain controller? If you get the answer "none", are you really sure?

Good candidates for consolidation are those small to medium sized databases but acknowledging that it's more than just size that makes a catalog small or medium. Your read-only or mostly-reads databases are also solid good candidates for consolidation.

Average candidates for consolidation are those business critical databases that have a far reaching effect. Any of your VLDBs with low to moderate workload demands are also in this category of "average candidates".

Approximately 5% of your environment will fall into the "poor and non-candidate" category. Typically, these are your business-critical applications with high requirements on availability, high transactions per second and/or complex queries, high and/or complex security and any hard-coded paths.

There are 4 main consolidation misconceptions.

1. Always consolidate homogeneous/similar workloads. While generally appropriate, this comes with caveats because the peak times will generally be the same, so you get contention resulting from similar utilization characteristics. Go into detail when you study the workload and capture the trend. What is consumed by SQL Server versus what is consumed by the OS to service SQL Server is what is important.

2. Do not consolidate ISV applications. This is more case by case, so check with the ISV, because single-server requirements are often only for convenience. Ask "why" about a single-server need: is it convenience or a technical requirement?

3. Always end in 1 central location. Regional consolidation is viable and even often practical.

4. Get buy-in from all parties/peers. While this is good to do, most of the time a consolidation is pushed down from the top. Executive sponsorship is critical for the success of your consolidation strategy because without it you will face additional challenges.

Joe shared with the session attendees questions to ask about consolidating on current technologies. Does it make sense to consolidate on the current platform or consolidate on the newest technology available? There are significant constraints for consolidations on SQL 2000 versus SQL 2005. Joe suggests you take a one- or two-step approach.

In the one-step approach, Joe suggests you consolidate straight to SQL 2005 if your environment will support it. The benefit of the additional features is worth the planning. Consider doing this for the low-complexity and/or read-only databases or those databases that have a flexible maintenance window.

In the two-step approach you will first need to upgrade to the current version and then go through the consolidation effort. The common theme here is that you should, if your environment can support it, move to the current technologies if you are to embark on a consolidation initiative.

Another lesson learned is to be aware of your platform.

x64 benefits include faster clock speeds, a longer history with multi-cores (and it will increase core counts faster), more versatility and low initial acquisition costs. The disadvantages are limited scalability, a lower consolidation ratio, SQL 2000 running in WoW mode (which limits memory to 4GB) and limited fault isolation.

IA64 benefits include high scalability (64 CPUs, 1TB memory) with a large cache of up to 24MB. This large cache bumps up performance and allows you to do a lot of work with low cache latency. IA64 also has better parallelism. The disadvantages vary; notably, not all software is supported on it, though most of what isn't is non-critical or things you should not have on a database server anyway. Higher initial costs and the learning curve can also be considered disadvantages. IA64 will run SQL 2000 but does not support any of the tools (like DTS); you can get around this if you don't log onto the console and do everything remotely.

TIP: Anti-virus tools should be supported; just be sure to use the right binaries so you don't bring the server down.

CPU/memory isn't what to focus on! Look at actual workload because it isn't about the raw horse power. Consolidation is all about the availability, reliability and manageability since those issues are more important and harder to solve.

HADR (high availability / disaster recovery) strategy brings in CPU considerations. Always start with the defaults and use WSRM for your % utilization policies. This ensures CPU availability for the OS without under-utilization. Use SQL CPU affinity settings for SLA management. SQL 2005 allows dynamic CPU affinity settings, which are very handy in a consolidated environment; these settings can be changed on the fly without having to cycle the server! Again, remember to think about peak loads and transaction load and give more when needed. Monitoring % utilization won't tell you much, so monitor waits, queues and context switching.
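
Assuming the affinity change is made through sp_configure, a sketch of the on-the-fly change looks like this (the mask value is a placeholder; 15 = binary 1111, i.e. CPUs 0 through 3):

    -- Hypothetical sketch: dynamic CPU affinity in SQL Server 2005
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'affinity mask', 15;
    RECONFIGURE;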

Memory considerations come into play with a consolidation plan. Allocate memory per instance. Define a minimum SLA and monitor for OS paging. AWE is fully dynamic in SQL 2005 but will not deallocate memory after claiming it. This is by design because deallocating the memory is an expensive operation and it is hard to determine how much you need. TIP: SQL Server 2005 ignores "set working set size", so don't bother.
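
A sketch of per-instance memory allocation through sp_configure, with placeholder values:

    -- Hypothetical sketch: cap memory for one consolidated instance
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 2048;
    EXEC sp_configure 'max server memory (MB)', 6144;
    RECONFIGURE;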

IO separation is still recommended in your storage considerations but probably not for each database. Separate when it makes sense and based on the type and not the database. A large number of physical disks is still recommended using a RAID 1+0 for high performance and availability. Tests indicate 15-50% performance degradation with direct RAID5 configuration.

TIP: The wrong firmware can cause performance degradation and ultimately catastrophic system failure.

Network considerations were the last topic Joe covered in this session. Gigabit only. Really! Don't forget the bus; it's not about the raw CPUs. Beware of DPCs fixed on one CPU. Involve your network operations folks for topology and traffic considerations so they don't think there is an attack happening due to the increase in traffic from your server(s). Monitor your TCP ports and make it a habit to do physical inspections of your hardware following hardware upgrades, replacements or moves.

Finally, a quick review of other considerations included HADR, logins and passwords, extended stored procedures, connection strings and network settings, shared services, MDAC version compatibility and chargebacks.

Joe offered a session full of questions and tips to help you on the road to a successful consolidation strategy if you start down that path.

Best Practices When Best Practices Are Not an Option
Sarah Barela, Manager of Database Services, Maximum ASP

Sarah has been a DBA for 11 years (the only job she has ever had), specializing in HA and managing a large number of servers. Sarah enjoys consulting work because it lets her use technologies she would never otherwise get to use. She got the idea for this session from attending prior conferences, getting excited about all the concepts, and then being hit with reality when she returned to work.

Best practices are relative, so you need to evaluate and manage your risks. You can manage this by learning how to audit your environment. Best practices are essentially expert recommendations for policies, procedures and configuration of SQL Servers. They provide you with a baseline for administering the server and will evolve and change as your server evolves and changes. For example, back up your servers, run integrity checks, and set that sa password!

Microsoft has a Best Practices Analyzer downloadable from its site. Only run it during a maintenance window. It checks security, the database engine, SSIS and SSAS, for example, and gives recommendations on how to mitigate the issues it finds.

Best practices will often contradict each other, so evaluate whether each one works for you; they are only general recommendations. Write your own DBA policies and procedures for the best practices that work for your organization, even if you can't implement them today. Always remember that it will be an organic document, so you can't walk away from it just because you finished the first pass. And just because you wrote it doesn't mean someone will read it.

The reality of best practices is that people don't like change and your business will have its own agendas. You have all those purchased applications where you can't change the code because of support agreements. The other reality is that budget drives what you can implement and just how "best practice" you can be. Keep your documentation updated and keep bringing it up, so that when the next budget cycle hits, you have more leverage.

Evaluate your risks, because you know you will have to do things you don't want to do. When you have to compromise your standards, know what your risks are by identifying all the scenarios where you are vulnerable, and then, for each vulnerability, list the ways to mitigate that risk. It won't mean anything if you didn't write it down, though.

Learn defensive database administration: it's about protecting your servers, your data and ultimately yourself from vulnerabilities in your environment. Constantly be on the lookout for problems, and communicate each vulnerability along with the ways to mitigate it and the consequences of not fixing it. Isolate the servers that are most at risk.

DBAs are administrators and not technical writers but it's still important to document everything because if it wasn't written down, it just didn't happen. Share all the written documentation with anyone who has a remote stake in the success of your servers because everyone hates surprises.

DBAs aren't in it to make friends.

Auditing is a must for everyone, but even more so for those who don't have full control of their environment. You might not prevent a problem, but auditing can give you a lead on mitigating it. There are risks to auditing too much, though, since it can affect your performance.

C2 auditing is an industry standard that collects all sorts of information about your server. Be careful, because it will impact performance and will quickly fill up drive space.
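
For reference, C2 auditing is switched on through an advanced sp_configure option; a minimal sketch (note the change only takes effect after the instance is restarted):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Turn on C2 auditing; takes effect after the instance is restarted
EXEC sp_configure 'c2 audit mode', 1;
RECONFIGURE;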

When you install SQL Server 2005, the "default trace" is enabled. This feature supports troubleshooting by logging the data needed to diagnose a problem after it occurs.
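
One way to confirm the default trace is running, and to read what it has captured, is sketched below; the file path is only an example and should be replaced by the one sys.traces returns:

-- Is the default trace enabled, and where is it writing?
SELECT id, path, max_size
FROM sys.traces
WHERE is_default = 1;

-- Read the trace file; substitute the path returned above
SELECT TextData, EventClass, StartTime, LoginName
FROM fn_trace_gettable('C:\trace\log.trc', DEFAULT);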

Profiler is your best friend. Use it to audit everything from Add Login events to failed login attempts to Change Password events. Get familiar with it and learn which criteria are important for your environment.

Use triggers to audit databases. Triggers are basically SQL statements that execute when an event happens, and they run under the security context of the user who executes the statement that fires the trigger. That is a risk a DBA has to watch: if someone malicious plants a trigger, a simple SELECT could end up deleting tables.

DDL triggers are new to SQL Server 2005 and can audit changes made to the server and database, but they don't fire for temporary tables or for some common stored procedures like sp_rename, and they unfortunately don't support the INSTEAD OF clause.
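
As a rough illustration of this kind of DDL auditing, a minimal sketch; the dbo.AuditLog table and the trigger name are hypothetical:

-- Hypothetical audit table
CREATE TABLE dbo.AuditLog (
    EventTime datetime NOT NULL DEFAULT GETDATE(),
    LoginName sysname  NOT NULL,
    EventData xml      NOT NULL
);
GO

-- Log every CREATE/ALTER/DROP TABLE in this database
CREATE TRIGGER trg_AuditTableDDL
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
    INSERT INTO dbo.AuditLog (LoginName, EventData)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
GO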

If you have Service Broker and some expertise with event notifications, use them to send yourself notifications based on events such as Profiler trace events and DDL events.

Things will go wrong, and they will go horribly wrong at exactly the wrong time, so crisis management is necessary. Just don't panic, and don't call everyone in to help with the problem; more people might eventually solve it, but it usually doesn't go any faster. Work in a team of two: one person does the work while the other records the changes being made and communicates to the users what's happening. Be reasonable and acknowledge when your personal bandwidth is at capacity.

Check for weak or null passwords. A query like the following finds the null ones:

USE master;
GO
-- SQL logins with no password set (isntname = 0 filters out Windows logins,
-- which always show a null password in this view)
SELECT name
FROM sys.syslogins
WHERE password IS NULL
  AND isntname = 0;
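
For the "weak" half of that check, PWDCOMPARE can test a SQL login's password hash against candidate values; a minimal sketch that flags blank passwords and passwords equal to the login name (the candidates are only examples):

-- Flag SQL logins whose password is blank or equals the login name
SELECT name
FROM sys.sql_logins
WHERE PWDCOMPARE('', password_hash) = 1
   OR PWDCOMPARE(name, password_hash) = 1;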

Be careful if you decide to use the SQL Server 2005 password policy, because it is easy to get locked out, and an attacker can deliberately lock out another user's login and bring down their site. The helpdesk might also beg you to turn it off.
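
Policy enforcement is set per SQL login, so you can apply it selectively; a minimal sketch with a hypothetical login name:

-- Enforce the Windows password policy (including lockout) for one SQL login
ALTER LOGIN SomeAppLogin WITH CHECK_POLICY = ON;
ALTER LOGIN SomeAppLogin WITH CHECK_EXPIRATION = ON;

-- Back it off if lockouts become a helpdesk problem
ALTER LOGIN SomeAppLogin WITH CHECK_EXPIRATION = OFF;
ALTER LOGIN SomeAppLogin WITH CHECK_POLICY = OFF;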

You don't have to be sneaky with your security, and SQL 2005 made permission granularity significantly more manageable. You can grant or deny permissions based on what your user really needs, without all the risks that come with being "dbo" or "sa".
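
A small sketch of that granular-permission idea; the schema, procedure and user names are hypothetical:

-- Give a reporting user just what it needs, nothing more
GRANT SELECT ON SCHEMA::Sales TO ReportingUser;
GRANT EXECUTE ON OBJECT::dbo.usp_GetOrders TO ReportingUser;

-- Explicitly block schema changes, regardless of other role memberships
DENY ALTER ON SCHEMA::Sales TO ReportingUser;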

Having solid backups of, or at least access to, your data is always promoted in the "best practices" conversations. If you do nothing else, back up your data! Plan for the day your backups fail and all your data is gone. ALWAYS have a current copy of your database schema so that if disaster strikes you won't panic. And remember not to start backing everything up the moment disaster hits: you can cause more damage and create more problems if you wipe out those transactions. Invest in a log reader; it will save you in a disaster.
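
A minimal sketch of the backup advice, with an example database name and path; the CHECKSUM option catches corruption at backup time, and RESTORE VERIFYONLY gives a cheap sanity check afterwards:

-- Back up with page checksums so corruption is caught at backup time
BACKUP DATABASE SalesDB
TO DISK = 'D:\Backups\SalesDB_full.bak'
WITH CHECKSUM, INIT;

-- Quick sanity check that the backup file is readable and complete
RESTORE VERIFYONLY
FROM DISK = 'D:\Backups\SalesDB_full.bak'
WITH CHECKSUM;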

Even when industry best practices are not an option, you can still implement best practices for your environment. Understand your needs and the underlying reasons for the resistance to standard practices, and accept that folks just do not want change until a disaster occurs. All your auditing and documentation will pay off when that disaster strikes.

September 21

by Kim Morgan

Keynote Session

Friday's general session was by far the liveliest and most entertaining session of the week. Board Director Lynda Rab started the keynote by thanking all attendees and sharing that the success of PASS, now and in the future, is directly related to the involvement of this phenomenal community.

Lynda announced the 2008-2009 Board of Directors Elect.

The four new Directors at Large are Rick Heiges, Paul Nelson, Greg Low and Erin Welker.

The Executive VP of Marketing Elect is Bill Graziano
The Executive VP of Finance Elect is Rushabh Mehta
The President Elect is Wayne Snyder

Wayne joined the stage and thanked everyone for their support. He went on to share that PASS week is the best week of his working life, thanks to the networking and knowledge available to all attendees. Volunteering, Wayne said, surrounds you with talented and smart people to rely on for help and networking. Wayne hopes to deliver an even better year with many great new things.

Wayne took the opportunity to recognize Joe Webb and Kevin Kline for their long-standing involvement and support for PASS. Both gentlemen have dedicated years of service to our organization, and it is with their help, direction and support that PASS has grown into what it is today.

Joe shared that the moment was bittersweet for him and that the opportunity he has had over these past six years has been invaluable. He thanked the board of directors for their work and reiterated that it has been, and will continue to be, his pleasure to serve.

Kevin stated that “PASS is about you and your needs and where you need to go” and encouraged everyone to stay connected. PASS gives you a voice for improvements to the technologies we rely on for our livelihood. “It’s about the vision and what’s unique about the board of directors is that they are all leaders in the general sense but work to make it happen; because we care. They have the heart of a teacher and care for each other”, Kevin added.

The Dell keynote followed the volunteerism awards, with Senior DBA and IT Engineer Thomas Barta from Dell joining the stage. Thomas gave an overview of the migration path dell.com took to upgrade and consolidate its SQL enterprise. Four years ago Dell began benchmarking SQL Server 2005 64-bit, and two years ago it began the certification process for enterprise deployment. SQL 2005 is the engine behind www.dell.com: Dell has over 500 servers and thousands of databases, all running on SQL 2005 64-bit. In spring 2006, Dell measured a 40% boost in performance, out of the box, just from the upgrade from SQL 2000 to SQL 2005. They were so happy with the success of that upgrade that they began a two-phased plan to move to 64-bit, answering the need to improve their infrastructure after a 200% increase in use during peak times. Dell first upgraded to SQL 2005 64-bit on multi-core servers with expanded memory, and the results were astonishing: response time went down by 41%, transactions per second increased by 33%, and CPU utilization decreased by an amazing 57%. The lessons learned from this upgrade are that all servers will be 2U 64-bit with multi-core, two-socket system boards, which results in less rack space and power utilization. Thomas also advised keeping application migrations and new hardware deployments in sync with development cycles. Dell then went on to give away a free computer to lucky PASS attendee Amy Laris.

Wayne Snyder joined the stage again to introduce Microsoft’s General Manager of BI Applications Bill Barker.  Bill began his keynote with sincere congratulations to the new PASS board members.  Bill took the session attendees on a humorous and engaging tour of Microsoft Performance Point Server 2007.

Performance Point Server has had four CTP releases and is officially complete as of September 21, 2007. With over 10,000 participants and 1,000 partners trained, Performance Point Server 2007 is on its way to being the leader in business intelligence applications. Using the mantra of “The Right Data, at The Right Time, in The Right Format”, Bill posed the question of “why do this on top of the BI stack?”. The simple answer is that when your users aren’t all on the same page, the data is subjective. You transform how you drive business performance by employing Performance Management. Performance Management is complete BI: it drives accountability by making people understand how the business is doing. BI is where the shared logic resides, whereas Performance Management is where “the” corporate definitions reside. Users manage accountability by writing plans down, which gives insight into plan versus actual. Performance Management is essentially continuous business improvement through monitoring, planning and analyzing the data, with the goal of improving the process. The BI vision has largely been unchanged since 1997: “BI is about improving the organization by providing business insights to all the employees leading to better and faster and more relevant decisions.”

Bill introduced Nathan Halstead, Program Manager for Microsoft, and Scott Heimendinger, Software Development Engineer for Microsoft. Nathan and Scott gave excellent and entertaining demos of Microsoft Performance Point Server 2007, taking a few opportunities to make the crowd laugh. Bill concluded the keynote by encouraging the attendees to check out Master Data Management. BI creates the need for MDM because MDM holds the reference data about key assets shared by multiple line-of-business applications; that data is reusable, uniquely identifiable, current, descriptive and contextual. Microsoft is attacking this topic with its acquisition earlier this year of Stratature. A technical preview will be available in February 2008 with MOSS. Interested individuals should email mdmvibe@microsoft.com to be added to the distribution list or to check out the website.


High Availability Alternatives
Ross Mistry, Principal Consultant & Partner, Convergent Computing

Ross is a published author of many books including a new book on SQL Server 2005 to be released next week with publisher SAMS. He is the leader of the SQL Server practice at Convergent Computing and specializes in SQL Server, Active Directory, Exchange and Operations Manager.

High availability essentially increases the availability of servers that host mission-critical applications by grouping servers to meet challenges such as server catastrophe, database unavailability, data and/or table failure, and site unavailability, while maintaining high uptime.

Clustering is one of four HA modes. Clustering is a combination of one or more nodes providing fault tolerance to protect the entire instance. A benefit of clustering is that it appears as a single computer in your network and Active Directory, so you can take advantage of role-based security policies. Clustering can provide seamless failover with a small outage that typically lasts from a few seconds to a few minutes, depending on the resources. You must install Windows Clustering before SQL clustering.

Typically you would implement clustering when you want to protect your entire instance or protect the whole server. Use clustering to reduce downtime during routine Windows maintenance windows or unplanned Windows failures. SQL maintenance would still require downtime.

Single-instance clustering (more commonly known as an active/passive cluster) is when the first node supports all requests and the second node sits idle as a dedicated hot spare. The spare is only accessed when a failure occurs, at which point it becomes the active node. A heartbeat is used to monitor the health of both servers.

Multiple-instance clustering (otherwise known as active/active clustering) is when the first node acts as the primary for instance 1 and supports all of its requests, while the second node acts as the primary for instance 2. In this scenario either server can act as the hot spare if the other node fails. You will have two separate instances of SQL Server running with independent databases. If a failure occurs, node 2 will, for example, host the databases and take on all the requests and resources from node 1.

Clustering supports up to 8 nodes in SQL Server Enterprise Edition and 2 nodes in SQL Server Standard Edition.

The components supported in SQL Server 2005 are the database engine, SQL Server Agent, Full-Text Searching and Analysis Services.

Best practices to consider when deploying HA with clustering are to use identical hardware and disk drives; avoid sharing the same disks between your quorum resource and other cluster resources; disable NetBIOS on the heartbeat network adapters; do NOT use the same service account for the Windows cluster and the SQL cluster; and remember to configure dependencies for shared disks within Windows Clustering.

Log shipping is the second of four HA options and is an inexpensive HA solution commonly used to maintain one or more standby databases and servers (called secondary databases). Secondary servers obtain updates from the primary server at regular intervals. The basic premise is that you create a backup, copy that backup, and then restore it on the secondary server, all at the database level. Failover is manual in log shipping, however. You configure the schedule and retention policy in a Management Studio wizard. Log shipping is supported in both SQL Server 2005 Enterprise and Standard editions and can be combined with other HA solutions such as clustering and mirroring. This solution is great for reporting, failover, and delayed recovery from human error.
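
The mechanics behind the wizard amount to a backup/copy/restore loop; a minimal manual sketch (database name and paths are illustrative):

-- On the primary: back up the transaction log to a shared location
BACKUP LOG SalesDB TO DISK = '\\FileServer\LogShip\SalesDB_001.trn';

-- On the secondary (after copying the file): restore it, leaving the database
-- in STANDBY so it stays readable for reporting between restores
RESTORE LOG SalesDB
FROM DISK = '\\FileServer\LogShip\SalesDB_001.trn'
WITH STANDBY = 'E:\LogShip\SalesDB_undo.dat';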

Best practices to consider when deploying HA with log shipping are to allocate enough space to store the transaction logs and to keep enough backups to recover from interruptions. Do not duplicate your transaction logs, and be sure to configure a retention policy at acceptable levels. Don't be afraid to schedule aggressively, because doing so lowers the amount of data lost during a catastrophic failure. Also be aware of any and all network limitations.

Database Mirroring is the third of four HA options and was introduced in SQL Server 2005, although not officially supported until the release of Service Pack 1. Mirroring is HA for a specific database and is supported in both Enterprise and Standard editions. Proximity does not matter with mirroring and it is combinable with other HA technologies.

There are three operating modes with mirroring.

HA with high safety is an auto-failover configuration that uses a synchronous form of mirroring requiring a witness server.

HA with high protection offers high safety but no automatic failure detection, and requires manual failover. This is a synchronous form of mirroring and does not require a witness server.

HA with high performance is an asynchronous form of mirroring without automatic failover or failure detection. A witness server is not required, and performance is not affected by network speed or distance. However, keep in mind that transactional consistency is not guaranteed, so data loss is possible.
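
Once mirroring is established, switching between the synchronous and asynchronous modes is a one-line change on the principal; a minimal sketch with an example database name:

-- High safety (synchronous); pair with a witness for automatic failover
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;

-- High performance (asynchronous); no automatic failover or detection
ALTER DATABASE SalesDB SET PARTNER SAFETY OFF;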

Best practices for HA with mirroring are to use the high safety mode when the principal and witness servers reside in the same location, and the high performance mode when the servers are in different physical locations. Both servers should have similar, but not necessarily identical, hardware configurations in memory, CPU and storage. Logins, scheduled jobs and extended stored procedures need to be created identically on the mirror instance. Switch to high performance if performance delays occur. Database mirroring fails over faster than clustering, so be careful when adding mirroring on top of a cluster: in the event of a disaster your data can be out of sync.

Replication is the final HA configuration mode, and with SQL Server 2005, peer-to-peer replication was introduced. P2P replication is based on the established transactional replication model, with a publisher, subscriber and distributor. P2P uses bi-directional replication without the complexity of hierarchies. Since updates can be made at any node and conflict resolution is limited, transactional consistency cannot be guaranteed.

Best practices for replication include ensuring a backup is taken after the topology is implemented. Script out all components for your DR strategy. Use Replication Monitor and Performance Monitor regularly to create baseline metrics. Validate data consistency on a regular basis! Learn how to manually modify schemas and publications after replication has been configured.

Choosing the right HA solution comes down to cost, the complexity of setup and administration, how easy failover and failback need to be, the transparency of the failover, the choice of OS and SQL editions, acceptable transaction latency, and geographical dispersion.


Data Protection Manager 2007 (DPM): How to Protect Microsoft SQL Server.
Jason Buffington, Senior Product Manager, Microsoft Windows Storage Solution Group

Protecting MS SQL Server is new in DPM 2007. DPM 2006 has been out for two years and worked as a file-system, disk-to-disk-only solution, so essentially the best you could get was the flat files associated with your database. SQL Server protection was configurable with 2006, under certain conditions explained in KB910401. DPM 2007 ships with SQL Server 2005 Standard Edition at no additional licensing cost, but that instance is locked down for use only with DPM.

DPM v2, aka DPM 2007, combines the best aspects of file-based advanced CDP and traditional tape backup into a single, reliable data protection solution offering continuous data protection. DPM provides rapid and reliable recovery from disk instead of tape by making use of the built-in VSS writer, which is the secret sauce to compatibility across all the platforms. At a cost of only $500 per server, it is a viable solution for protecting your data.

A lightweight agent (enterprise or standard) sits on each server doing two things: it invokes an assessment to check for deltas, and for transaction-based applications it grabs the transaction logs every 15 minutes. While grabbing the transaction logs, it is important to note that no CHECKPOINT is forced, because the grab happens after the transactions are closed. DPM sends a command to SQL Server alerting it of the intent to invoke VSS, and SQL Server prepares the logs.

Jason then went on to demo configuring DPM. You configure the protection in the same view where you manage your data. Disk to disk, disk to tape, and disk to disk to tape are all valid configurations. The key questions to ask and analyze at this stage are "how much data do I need to keep?" and "how often do I need to synchronize?". Retention follows a FIFO approach.

Express Full backups offer block-level synchronization from your production database to a replica database. Express Full locates the files that make up the database and then determines which blocks on disk make up the data to be protected. DPM does not consume any additional space on the SQL Server because it uses a RAM-based table; generally speaking it consumes about 1MB per 128GB of disk. An Express Full isn't a "full backup" in the sense that all data is backed up every time, but "full" in that at the end of the snapshots you have a full representation of the data at that point in time. This is less IO-intensive than log shipping: transaction logs are not applied at the target but held as differentials instead, allowing you to roll back to within 15 minutes.

DPM 2007 can hold up to 512 shadow copies plus their logs. So, for example, one Express Full per week gives 512 weeks of data restorable to within 15 minutes, if you have the disk to do so. More reasonable and realistic is 30, 60 or 90 days. As a further example, 512 weeks x 7 days x 24 hours x 4 (15-minute intervals) is roughly 344,000 recovery points, but again, that isn't realistic.

DPM 2007 will not alert you if you add a database to the environment, but if you make use of MOM you can be notified so you can add it to DPM.

VSS is only invoked once per day, for the Express Full backup, so the services are only hit that one time and SQL Server does not take a hit for the transaction log grabs. DPM internally tracks where it left off for the deltas without leaving a marker on the SQL Server. It is not meant to run in parallel with other third-party backup applications; it has been designed to be "THE" backup solution, not "A" backup solution.

Jason went on to demo restoring data. You are given the capability to restore to tape, to disk, to the original instance or to an alternative instance.

This would make a complementary addition to any high availability environment.

by John Cook

What Has Changed? Tracking and Capturing Data Changes in SQL Server 2008
Mark Scurrell, Microsoft

There are third-party tools and custom approaches such as triggers that people use to track changes.  Each solution has its own unique combination of cost, complexity, and overhead.  The same is true of auditing. Although change tracking and auditing aren't necessarily the same thing, they do overlap each other. 

SQL Server 2008 has new features for both tracking changes and for auditing.  It's important to understand that there are two separate and distinct issues with tracking data changes.  For some use cases, the only thing needed is the knowledge that a change occurred.  In other use cases, it's necessary to know what data changed.  This requirement could be as simple as needing to know the most recent value or as comprehensive as maintaining a history of all changes.

Change Data Capture (CDC) is a comprehensive data change tracking feature built into SQL Server 2008.  To track all of the data changes to a table, CDC creates a copy of the table being monitored.  Additional columns are added to the copy of the table. These additional columns record who made the change, the nature of the change, and when the change was made.  If you have a large volume of changes you need to track, you'll need to plan for your additional space requirements.
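
This was still a pre-release feature, so the exact syntax was subject to change; as a rough sketch, assuming the sys.sp_cdc_enable_db and sys.sp_cdc_enable_table procedures, enabling CDC looks roughly like this (schema, table and role names are examples):

-- Enable CDC for the current database, then for one table
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = N'cdc_reader';

-- Changes are then read through the generated function, e.g.
-- cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, 'all')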

Change Data Tracking is a lightweight feature for letting you know which rows have changed, so this is similar to a trigger in some respects.
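
Similarly, a rough sketch of enabling the lightweight change tracking feature, assuming the ALTER DATABASE / ALTER TABLE syntax from the SQL Server 2008 builds; the database, table and key column names are examples:

-- Turn change tracking on at the database level, then per table
ALTER DATABASE SalesDB
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Which rows changed since the version saved at the last sync?
DECLARE @last_sync_version bigint = 0;  -- in practice, stored from the previous sync
SELECT CT.OrderID, CT.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS CT;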

SQL Server Q&A
Ken Henderson, Microsoft

The room monitor had to tell people not to sit on the floor because of Denver fire codes.  Initially it appeared that some people would be turned away, but instead, the meeting start time was delayed to provide time to open a wall and expand the room.  More chairs had to be brought into the second, now adjoining, room.

Kevin Kline opened the session by asking how many people attended Ken's presentation last year. About 3% raised their hands. Kevin introduced Ken as a fellow fan of Delphi.

Ken is the author of The Guru's Guide to Transact-SQL. He said he doesn't consider himself a guru, but once a marketing person found out about a reviewer's note that this was a guru's guide, the title stuck.  In response to an audience question, Ken said he went with Addison-Wesley as the publisher instead of Microsoft Press because of the company's reputation as the premier technical publishing company.

Kevin asked Ken how he became an author. A publisher called him and asked him to write a book after he'd written many magazine articles. Kevin's first book experience was similar.

Does Ken stick to his table of contents or change or add to it as he goes along? "Both," Ken replied, to laughter from the audience. In response to another question, Ken said there will be a guru's guide for SQL Server 2008.

As a respected author, does Ken have access to restricted information? Ken replied "Yes," but then pointed out that he works for Microsoft.

Kevin asked Ken what his favorite language is. "C#" was the answer. Kevin asked members of the audience to raise their hands if they were primarily DBAs. Then he asked them to keep their hands up only if they were also competent C# programmers. Kevin estimated 1 out of 60 people indicated this dual competency.

For those DBAs who are T-SQL savvy but not C# skilled, is there a book for DBAs wanting to learn C#? No one had a specific book recommendation.

Should Visual Studio be used instead of SSMS? Ken said no.

Ken discussed how he was hired by Microsoft. He had to keep turning down offers because he didn't want to relocate or travel. Finally he was able to get an offer that met his requirements. He is sort of the perf guy on his team.

Kevin asked Ken to talk about Connect (http://connect.microsoft.com/sql) and what it is. It is where the user community gets to enter bug reports and suggestions. Has going from development to support changed the way he develops? Ken said that yes it had, and that he is much more sensitive to supportability issues and to designing supportability into the product.

Is there going to be a GUI around Service Broker? Honestly, Ken doesn't know.

Where do you see the SQL Server tools going? They are headed in a good direction. Earlier in SQL Server, problems people didn't care about were being solved. Microsoft wants to change the context of SSMS depending on the kind of work being done, similar to Visual Studio.

How about opening the internal bug database to the public? There is some visibility and there is the ability to vote on and comment on bugs.

Are we going to get a UI for SSIS other than Visual Studio? Don't know. Can the BI tools be made more intuitive?

How do you balance Microsoft implemented features and leaving something to third-party vendors? Ken's personal opinion is that Microsoft puts features in the product that customers want. Microsoft isn't out to make money from the tools, so third-party vendors are not viewed as competitors. Kevin, as an employee of Quest, backs this up.

SQL Nexus is available for download. For details, see http://www.sqlnexus.net/

Any thoughts on the future direction of SQL Server? Ken said "I think we're headed in a good direction" and that the database market is very competitive.

What version of SQL Server will only ship on 64 bit? Don't know. Kevin said 64 bit is a huge performance gain and he thinks the only way to get people on it is to stop offering 32 bit. Ken agreed.

What time-management tools has Ken, as a technical person, found to be the most valuable? Ken said the hardest thing is achieving balance; if the balance isn't kept, the quality goes down. He said he won't work himself to death again. He's trying to get organized around smaller chunks of work instead of pushing on until 5:00 am, and to make time to exercise, eat right, and still be a dad and husband. Kevin said his experience is similar; there is no time for television in his life now. He does keep the Sabbath and doesn't do any work on Sunday.

How can SQL Server be an enterprise product with broken service pack releases? Ken said work is being done to improve the situation.

Are you working on anything new and are you writing anything now? Ken is writing some SQL Server 2008 books to be released next year.

Kevin explained that the Q&A format enabled Ken to attend PASS without putting excessive preparation demands on him. Kevin said he wants feedback on whether or not attendees would like more sessions in this format.

Jean-René Roy, Technologies Softdesign Inc.
Jean-René Roy has been developing IT systems for large and small enterprises since 1987. In 1991 he founded the consulting firm Technologies SoftDesign Inc. Since that time he has worked on projects for Bell Canada, CGI, ArchiDATA, the House of Commons, InnovaPost, Kraft, ADN Medical and many other clients. He has been and is still very active in the developers’ community, acting as a user group leader for more than ten years with the MFUG, SSMUG and GUVSM user groups in Montreal and the Ottawa.NET and OttawaSQL.NET user groups. Jean-René is also the founder and president of www.DevTeach.com / www.SQLTeach.com, an international developers’ conference. Jean-René Roy is a SQL Server MVP.

John Cook, Sogeti USA
John Paul Cook is a Microsoft MVP for virtual machines and works in the Houston office of Sogeti USA. He has been working with relational databases for over 20 years. In his spare time, he writes about virtualization and SQL Server for various publications. He has never played a computer game because he doesn't think it is possible to find anything as exciting as debugging a stored procedure.

Kim Morgan, Sogeti
Kim Morgan is a Principal Microsoft and Infrastructure Consultant with Sogeti USA. She is a 12-year veteran of SQL Server database administration and has a passion for all things infrastructure. Kim is active in the local technical communities and enjoys learning new technologies. In her non-geek time, she is a certified ASA Umpire and enjoys spending her free time with her husband Marc and son, Dax.
