Katmai and Orcas Beta 2 – Problems, Problems, Problems

Just tried to install Katmai (SQL Server 2008) onto my dev machine, which already had Visual Studio 2008 on it. After installing, Management Studio was not on the Start menu – strange. After a quick look on the MSDN forum I found a pointer to this in the readme:

4.5.1 Remove SQL Server 2005 Workstation Components before Installing SQL Server “Katmai”

Workstation components is an option under the SQL Server 2005 item in Add/Remove.


After uninstalling this and repairing the SQL Native Client installation, still nothing. I then re-ran the main install and selected just the workstation components option. This installed a little bit more, but then Windows reported that SSMS had crashed, with the setup log saying:

MSI (s) (9C:8C) [16:37:06:048]: Product: Microsoft SQL Server “Katmai” Tools — Installation failed.

MSI (s) (9C:8C) [16:37:06:057]: Windows Installer installed the product. Product Name: Microsoft SQL Server “Katmai” Tools. Product Version: 10.0.1019.17. Product Language: 1033. Installation success or error status: 1603.

Things just went from bad to worse from then on. I used System Restore to roll back my system and start again. This time I uninstalled the SQL Server 2005 components first, and the install worked fine and didn’t fail. However, when I tried to log in to the server from Management Studio I was getting more errors:

A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 – No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

Within event viewer it said

The Tabular Data Stream (TDS) version 0x73090003 of the client library used to open the connection is unsupported or unknown. The connection has been closed.  [CLIENT:]

I got the same errors when I tried to log in from another VM with Katmai installed, and also from Visual Studio. However, with Visual Studio I could connect using the SA login with OLEDB, and also with sqlcmd, even though I couldn’t from Management Studio. It seems the SqlClient provider cannot connect to Katmai correctly, even when the server itself is working.

I tried to connect to a working Katmai instance on another server and this also failed.

After a few hours of playing around I have given up! Not sure if I am going to attempt to uninstall Katmai or just flatten the machine.

Not sure why this is causing problems, as Katmai and the Visual Web Developer June CTP worked fine side by side. I think installing Katmai after Orcas was the killer.


SQL Server 2008: Table Value Parameters (TVP)

In this post, I am going to give an overview of the new Table Value Parameters (TVP) feature coming in SQL Server 2008. Table Value Parameters allow you to pass user-defined tables between queries, and also between client and server. Because they are tables, you can do various things with them, for example querying, joining or inserting values just like you would with a normal table. Instead of a query taking a long list of parameters, it can simply take a single TVP as a parameter.

To create a TVP, you have to define a user defined type and the columns which the TVP will hold. Here we create a type of Customer which can hold an id, their name and postcode.

CREATE TYPE Customer AS TABLE (id int, CustomerName nvarchar(50), postcode nvarchar(50))

Then we create a dummy table to store the information.

CREATE TABLE Customers (id int, CustomerName nvarchar(50))

CREATE TABLE CustomerPostCodes(id int, postcode nvarchar(50))

We now create a procedure which takes a single parameter, a TVP. Inside, we can insert the data into two separate tables; from the outside it’s just a single object and a single stored procedure being called.

CREATE PROCEDURE AddCustomers(@customer Customer READONLY)
AS
INSERT INTO Customers
SELECT id, CustomerName FROM @customer

INSERT INTO CustomerPostCodes
SELECT id, postcode FROM @customer


One important point when using TVPs as parameters is that they must have the READONLY attribute. Another important point: TVPs are actually temp tables stored on the server in tempdb. As you can see, the stored procedure splits the records from the Customer parameter (which could contain multiple customers) across the two related tables by querying it.
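Since a TVP can be queried like any table, it can also be joined against permanent tables. As a hedged sketch (the FilterNewCustomers procedure and the join are my own illustration, not part of the original sample), a TVP parameter could be used to filter out customers we already know about:

```sql
-- Hypothetical example: join the TVP against the Customers table
-- to return only customers we have not seen before.
CREATE PROCEDURE FilterNewCustomers(@customer Customer READONLY)
AS
SELECT c.id, c.CustomerName
FROM @customer c
LEFT JOIN Customers existing ON existing.id = c.id
WHERE existing.id IS NULL
```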

In order to use a TVP, we need to declare it like we would with a variable.

DECLARE @myNewCustomer Customer;

We can then use the table like we would any other table.
INSERT INTO @myNewCustomer VALUES (1, 'Harry', 'NEW')

Finally, we can execute the stored procedure and pass the table as a parameter.
EXEC AddCustomers @myNewCustomer

If we query the two tables, they will contain the following records.

id          CustomerName
----------- --------------------
1           Harry

id          postcode
----------- ------------
1           NEW

TVPs will solve a lot of problems and allow for a much more streamlined experience. They allow for joins, queries and so on, and are recommended for inserting small numbers of records (fewer than around 1,000). For larger inserts, look at the bulk insert functionality.
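As a rough sketch of that bulk-load alternative (the file path and delimiters here are made up for illustration), T-SQL’s BULK INSERT can load a flat file straight into the target table:

```sql
-- Hypothetical example: bulk load customers from a CSV file
-- (file path and delimiters are illustrative only).
BULK INSERT Customers
FROM 'C:\data\customers.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
```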

But they would be a bit limited if you could only use them within SQL itself, so there is full support within ADO.NET 3.0, using the SqlDbType.Structured parameter type. The values which can be passed in from ADO.NET are DataTable, IEnumerable&lt;SqlDataRecord&gt; and DbDataReader. There is also support in ODBC, SSMS and within replication.

For example, adding a DataTable as a parameter to the stored procedure from C# looks something like this (note the parameter must be marked as SqlDbType.Structured):

DataTable dt = new DataTable();
SqlCommand cmd = new SqlCommand("AddCustomers", sqlConn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("@customer", dt).SqlDbType = SqlDbType.Structured;
cmd.ExecuteNonQuery();

Now that’s cool. Hope you have found this useful.


Download my SQL sample here

Related Posts

SQL Server 2008- Change Data Capture (CDC)

SQL Server 2008- Declarative Management

SQL Server 2008- SQL MERGE Statement

Katmai June CTP

SQL Server 2008: Change Data Capture (CDC)

Another of the new features within SQL Server 2008 is Change Data Capture (CDC). CDC is designed to capture insert, update and delete activity on a SQL table and place the information into a separate relational table. It uses an asynchronous capture mechanism that reads the transaction log and populates the CDC table with the data of the rows which changed. The CDC table mirrors the column structure of the tracked table, together with metadata about the change. In later releases there is also going to be an audit feature, allowing you to audit who accesses what information and when, based more on select statements.

One of the posts on the forums details how to set up CDC; it’s also in Books Online. To enable CDC, the following statements are required. Note that SQL Server Agent must be running for this to work.

use testDatabase
--Activate CDC
EXEC sys.sp_cdc_enable_db_change_data_capture
SELECT is_cdc_enabled FROM sys.databases WHERE name = 'testDatabase'
--Enable CDC on table
EXEC sys.sp_cdc_enable_table_change_data_capture @source_schema = 'dbo', @source_name = 'Table_1', @role_name = 'cdc_test'
SELECT is_tracked_by_cdc FROM sys.tables WHERE name = 'table_1'

The sp_cdc_enable_table… statement enables the tracking of changes for a table. It requires parameters for the source schema, the source name (the table you want to track) and the role name; if the role does not exist it will be created for you.

After executing these statements you will have a number of new tables within your database.  The tables are:

  • cdc.captured_columns – Information about the captured columns being tracked.
  • cdc.change_tables – Tables being tracked.
  • cdc.ddl_history – Changes to the tracked table, details the command issued on the table.
  • cdc.index_columns – Guessing this is tracked indexed columns.
  • cdc.lsn_time_mapping – When a transaction starts and ends. Relates to rows in the tracked tables.

There is also a change table for each tracked table; in my case I have just the one – cdc.dbo_Table_1_CT. If we execute a change to the table, such as inserting a new record:

INSERT INTO Table_1 VALUES (1, ‘First Test’)

A new row is added to the table, with a record also being inserted into lsn_time_mapping.

__$start_lsn           __$end_lsn             __$seqval              __$operation __$update_mask id          TestMessage
0x0000001E0000006B001A NULL                   0x0000001E0000006B0018 2            0x03       1           First Test

If we update the table again using

UPDATE Table_1 SET TestMessage = ‘Updated First Test’ WHERE id = 1

Two records are inserted into the change table: one with the TestMessage 'First Test' (the original value) and the other with 'Updated First Test', both relating to a single transaction in the lsn_time_mapping table. If we do

DELETE FROM Table_1 WHERE id = 1

We have a single row inserted into dbo_Table_1_CT saying which row was deleted.

Within dbo_Table_1_CT there is a column __$operation. This gives you the ID of the operation executed on the row: 1 = delete, 2 = insert, 3 = value before update, 4 = value after update.
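Given those operation codes, a hedged sketch (using only the change-table and column names shown above) of pulling just the post-update images out of the change table might look like:

```sql
-- Hypothetical example: read only the 'after update' images
-- (operation 4) from the change table, oldest change first.
SELECT __$start_lsn, id, TestMessage
FROM cdc.dbo_Table_1_CT
WHERE __$operation = 4
ORDER BY __$start_lsn
```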

If we change the design of the table while it is being tracked, a row is inserted into cdc.ddl_history with the command executed on the table.

source_object_id object_id   required_column_update ddl_command                    
—————- ———– ———————- ——————————–
1077578877       1093578934  0                      ALTER TABLE dbo.table_1 ADD Status nvarchar(50) NULL

However, if we execute another insert command on the table, the new column does not appear in the result set/dbo_table_1_ct. It would be nice to keep the two in sync and not have to worry about updating the tracking when the source table changes. To get the tables back in sync, I had to disable the tracking (the capture_instance parameter relates to the set of columns being captured, I think) and re-enable it. However, in doing so I lost all of the previous information – boo. The commands to do this are:

EXEC sys.sp_cdc_disable_table_change_data_capture @source_schema = 'dbo', @source_name = 'Table_1', @capture_instance = 'all'
EXEC sys.sp_cdc_enable_table_change_data_capture @source_schema = 'dbo', @source_name = 'Table_1', @role_name = 'cdc_test'

In case you are worried about this using up too much space, there is a sys.sp_cdc_cleanup_change_table stored procedure which can be used to remove entries up to a point in time.  You could then link this to a Job to archive the information, or simply report and remove.

I think that pretty much covers it. This is another really useful feature; I have seen one or two attempts at auditing database activity like this and they are a bit hit and miss, so this will save a lot of heartache. Looking forward to seeing what happens with the future auditing features.

One concern: if a table is tagged as encrypted, is the CDC table also encrypted? Will have to research this at a later date.

SQL BOL: ms-help://MS.SQLCC.v10/MS.SQLSVR.v10.en/s10de_1devconc/html/7d514b05-7c4b-4ef2-8f16-b720df3bf44a.htm

SQL Server 2008: Declarative Management Framework

One of the new features within SQL Server 2008 is the Declarative Management Framework (DMF), a new policy-based management framework allowing you to define what can or cannot be done on the server itself, which can then be redistributed across your network of servers for more consistent management. Content is based on the June CTP.

Three main components make up the framework:

  • Policy management.  Create the policies from within SQL Management Studio.
  • Explicit administration.  Select and manage targets for the policies.
  • Automated administration. How and when the policies are evaluated.

Some of the important terminology includes:

  • Facet.  A set of logical properties that model behaviour or characteristics.  There are a number of facets built into the server, each with a set of properties on which you can set a condition.  For example, there is a Surface Area facet which allows you to specify whether a setting should be on or off.
  • Condition.  Expression of allowed states.
  • Policy.  Condition and the expected behaviour.
  • Policy group.  Group to help manage policies.  A policy can only belong to one group.
  • Execution Mode.  How the policy will be executed. 

What is a Facet again?

Facet is one of the core concepts within the DMF.  They set out all of the properties which can be used as part of the condition/policy without you having to worry about how to locate and access the information.  When you create a new condition, you say which facet contains the information you want, and then set a condition to the properties you are interested in.

When it comes to creating a policy, the facet plays a part in what the policy can be targeted at.  The Multipart Name facet can be targeted at/applied to a number of different objects in the system, while the Surface Area facet only applies at server level.  This makes managing the policy a lot simpler, as you only see the options which you are interested in.

If you’re unsure which facet contains the property you are interested in, you can right-click on the object > Policies > Facets.  This will display all the related facets, with all the properties set to match the selected object.  From here you can also create a policy to define how it should be configured, which makes it quick to set up a known baseline and tweak the policy from there.

What and how are they executed?

DMF evaluates the created policies either on an ad-hoc basis, or automatically via Check on Schedule, Check on Changes or Enforce, which are set when the policy is created or changed at a later point.  Policies can be disabled/enabled, and can also be checked in test mode to make sure they are working correctly.  Check on Schedule uses a SQL Server Agent job to evaluate a policy.  Check on Changes uses event notification to evaluate the policy when changes occur.  Enforce uses DDL triggers to prevent policy violations.  Notice that the first two do not prevent changes, they just notify, while Enforce prevents the change taking place.

Dan has done a really good write-up on the details of Enforce at Declarative Management - Behind Enforce. To summarise the main points: Enforce works off a public DDL trigger which listens for all server and database events.  This is a single trigger, updated with the events required based on the active policies, which in turn are based on the facet.  The DM engine runs inside SQLCLR; however, it runs even if SQLCLR is set to off, as the off mode still allows assemblies signed by Microsoft to execute.  An interesting point is that the DM engine is dependent on DDL eventing and so can only enforce state on objects whose changes are transactional.  If a change is not transactional then Enforce is not allowed (the option will be disabled), as the change cannot be cleanly rolled back if the policy check fails.

How do you create a policy?

When creating a policy, the first thing you need to do is create a condition.  This is all built into SQL Management Studio, within the Object Explorer, under Management there is a new tree item for Policy Management. This contains all of the information relating to the facets, conditions and policies related to the connected server.

To create a new condition, right-click Conditions or a facet to display the create dialog; you will need to give it a name, choose the related facet and then set the conditions.  You can write some really powerful expressions, however I do not think there is a way to write a condition across two facets.  Maybe allowing the condition to be given as a SQL statement would add more flexibility.

Once the condition has been created, you can then attach a policy to it. Right-click Policies and select New Policy. You will need to give it a name, select the condition you just created, select what the policy applies to, and finally select the execution mode.  That’s it, the policy is now set up to manage the server.  A policy can only have one related condition; however, you can have a group of policies.

 As already mentioned, you can create a condition and related policy from an existing database object.

Where are they stored?

Policies are stored in the msdb database, but can also be exported as a not very interesting XML file.  They can be accessed via views, or via SQL Management Studio.


There is currently a known bug in the CTP which means the DMF does not work. As mentioned, when the CLR is off only Microsoft-signed assemblies can be executed; however, the Declarative Management DLL did not get signed as part of the build process. When a policy is executed, it causes the following error:

Msg 10314, Level 16, State 11, Procedure sp_syspolicy_execute_policy, Line 25
An error occurred in the Microsoft .NET Framework while trying to load assembly id 65536. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error:
System.IO.FileLoadException: Could not load file or assembly ‘Microsoft.SqlServer.DmfSqlClrWrapper, Version=, Culture=neutral, PublicKeyToken=89845dcd8080cc91’ or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A) —> System.Security.SecurityException: Strong name validation failed. (Exception from HRESULT: 0x8013141A)

You need to enter the following command and restart the services to allow SQL Server to execute the non-strong-named assembly.  The ‘89845DCD8080CC91’ string is the public key token for the assembly, in case you were wondering.

"%ProgramFiles%\Microsoft Visual Studio 8\SDK\v2.0\Bin\sn.exe" -Vr Microsoft.SqlServer.DmfSqlClrWrapper,89845DCD8080CC91


The Declarative Management Framework is an excellent addition to the SQL Server toolset and I can see a lot of benefit for DBAs, no more going around 100 servers to check that the CLR is enabled, instead they can just execute an ad-hoc script or enforce it by default.

Dan Jones, who works on the SQL team, has lots of interesting information about this on his blog.


SQL Books online: