Thursday 17 December 2009

TFS merge filename collision error

The scenario is that I have a "Main" repository, which is branched to a "development" tree. Amendments are applied to the "development" tree and, when released, merged back to the "main" repository. It was this merge back/check-in that was giving the error. Seeing as there had been no changes to filenames or locations (there were some new files, though), the collision error is a little strange.

Anyhow, to sort this out I forced a Get Latest on the "main" repository, overwriting all the files. Then I merged the "development" tree back, and Check In Pending Changes then worked with no error. All via the GUI.

There are a number of non-GUI solutions based on performing a baseless merge. I'm not sure of the side-effects of doing this; the solution described above seems to retain all history as expected.

Look here for details on a baseless merge.
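For reference, a baseless merge has to be done from the command line with tf.exe; a minimal sketch (the branch paths are placeholders) would be:

tf merge /baseless /recursive $/MyProject/Development $/MyProject/Main

The /baseless switch tells TFS to merge items that do not have a branch relationship, which is presumably why the usual history and collision checks behave differently.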

Tuesday 24 November 2009

SSMS Tool Pack

A brilliant addon for SSMS 2005/2008, and it's free! Features include:

• SQL Snippets
• Window Connection Coloring
• Query Execution History and Current Window History
• Format SQL
• Search Table, View or Database Data
• Run one script on multiple databases
• Copy execution plan bitmaps to clipboard or file
• Search Results in Grid Mode or Execution Plans
• Generate Insert statements from resultsets, tables or database
• Regions and Debug sections
• Running custom scripts from Object Explorer
• CRUD stored procedure generation
• New query template
• General options

Install now, if only for the colour-coded connection windows!

Tuesday 3 November 2009

SQL Server Memory Settings 64bit

Rather interesting blog posting here talking about SQL Server 2005/2008 64-bit and memory settings. The recommendation is to always set max server memory, as the following table outlines:

Physical RAM    MaxServerMem Setting
2GB             1500
4GB             3200
6GB             4800
8GB             6400
12GB            10000
16GB            13500
24GB            21500
32GB            29000
48GB            44000
64GB            60000
72GB            68000
96GB            92000
128GB           124000


What makes this interesting is the comments posted, where one poster mentions that they have seen instability when max server memory is not set.
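For reference, max server memory is set via sp_configure, with the value in MB - so the 8GB row above would be applied as:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6400;
RECONFIGURE;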

Tuesday 27 October 2009

Run SQL Trace in background

Rather useful, this, if you need to monitor a trace for a period of time and cannot leave the session logged in. Found how to do it here.

First use Profiler to define the events, columns and filters needed. Some useful events are SQL:BatchCompleted, RPC:Completed and SP:StmtCompleted. Important columns are Duration, CPU, Reads and Writes. Some advanced events are SP:Recompile and Scan:Started, to check for table and index scans.

Click the Run button, then immediately stop the trace.

Click the File menu, expand the Export option, and then expand the Script Trace Definition option. Choose For SQL Server 2005 (or SQL 2000 if creating a script for an older SQL Server) and select a filename to save the script.

Once the script has been saved, open it for editing in SQL Server Management Studio.

The following line of the script must be edited so that a valid path, including a filename, is specified (@maxfilesize defaults to 5MB):

exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere', @maxfilesize, NULL

Run the script. It will create and start the trace, and also select back the @traceID.

Once you are done, use the @traceID to stop and close the trace (99 here being the trace ID):

EXEC sp_trace_setstatus @traceid=99, @status=0 -- stop the trace
EXEC sp_trace_setstatus @traceid=99, @status=2 -- close the trace and delete its definition
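If you lose track of the trace ID (easy to do if you have logged out in the meantime), the sys.traces view lists all traces defined on the server:

SELECT id, path, status FROM sys.traces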

The fn_trace_gettable function can be used to read the data from the trace file:

SELECT * FROM ::fn_trace_gettable('C:\Traces\myTrace.trc', 999)

where 999 is the number of rollover trace files to read.


Easy!

Monday 26 October 2009

Benchmarking SQL Server

A very useful blog entry on Benchmarking SQL Server.

The article looks at the PAL tool, found here.

The process is straightforward. Create a performance log for a period of time on the SQL Server you need to benchmark. Typical counters to record are:

Physical Disk
Logical Disk
Process (I will sometimes tweak this down to just the SQL Server process)
Memory
Network Interface (sometimes I will tweak down to just the counters needed and just the NICs involved)
Paging File
Processor
System
SQL Server: Access Methods
SQL Server: SQL Statistics
SQL Server: Buffer Manager
SQL Server: General Statistics
SQL Server: Latches (Often I just look at the two required counters rather than the object)
SQL Server: Locks
SQL Server: Memory Manager

Save this for a reasonable slice of time and then run it through PAL. This generates a nice report, highlighting any issues. See the above blog link for a more in-depth description.
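As an aside, the counter log can also be created from the command line with logman rather than through the Performance GUI. A rough sketch (the log name, counter list and output path are placeholders, and the counter list is heavily abbreviated):

logman create counter SQLBenchmark -c "\Memory\Available MBytes" "\Processor(_Total)\% Processor Time" "\SQLServer:Buffer Manager\*" -si 15 -o C:\PerfLogs\SQLBenchmark
logman start SQLBenchmark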

Thursday 8 October 2009

Find sql usage in a database

The following searches all objects in a database for a particular piece of text:

DECLARE @searchString VARCHAR(128)

SET @searchString = 'xp_smtp'

SELECT DISTINCT so.name, so.type, so2.name as source
FROM sysobjects so
INNER JOIN syscomments sc ON so.id = sc.id
LEFT OUTER JOIN sysobjects so2 ON so2.ID = so.parent_obj
WHERE text LIKE '%' + @searchString + '%'
ORDER BY so.name
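One caveat: syscomments stores object text in 4000-character chunks, so a match that straddles a chunk boundary can be missed. On SQL 2005 and later, sys.sql_modules holds the full definition; a rough equivalent (reusing @searchString from above) is:

SELECT o.name, o.type_desc
FROM sys.sql_modules m
INNER JOIN sys.objects o ON o.object_id = m.object_id
WHERE m.definition LIKE '%' + @searchString + '%'
ORDER BY o.name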

Monday 5 October 2009

Collecting Query Stats

Interesting article here which provides a nice SQL query to investigate the stats of all statements executed on a server.

Get PID for a suspended or sleeping SQL Session

Quick Tip:

Profiler only works for sessions that are active, and sp_who/sp_who2 do not give enough detail, so to get any more information a quick check in sys.dm_exec_sessions will provide all that is needed.
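In particular, the host_process_id column gives the client PID even for sleeping sessions. Something along these lines (a sketch) does the job:

SELECT session_id, host_process_id, host_name, program_name, login_name, status
FROM sys.dm_exec_sessions
WHERE status = 'sleeping'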

Tuesday 25 August 2009

Index Defragmentation Script

Borrowed from the MS online help pages:

SET NOCOUNT ON;
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command nvarchar(4000);
-- Conditionally select tables and indexes from the sys.dm_db_index_physical_stats function
-- and convert object and index IDs to names.
SELECT
object_id AS objectid,
index_id AS indexid,
partition_number AS partitionnum,
avg_fragmentation_in_percent AS frag
INTO #work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 10.0 AND index_id > 0;

-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do;

-- Open the cursor.
OPEN partitions;

-- Loop through the partitions.
WHILE (1=1)
BEGIN;
FETCH NEXT
FROM partitions
INTO @objectid, @indexid, @partitionnum, @frag;
IF @@FETCH_STATUS < 0 BREAK;
SELECT @objectname = QUOTENAME(o.name), @schemaname = QUOTENAME(s.name)
FROM sys.objects AS o
JOIN sys.schemas as s ON s.schema_id = o.schema_id
WHERE o.object_id = @objectid;
SELECT @indexname = QUOTENAME(name)
FROM sys.indexes
WHERE object_id = @objectid AND index_id = @indexid;
SELECT @partitioncount = count (*)
FROM sys.partitions
WHERE object_id = @objectid AND index_id = @indexid;

-- 30 is an arbitrary decision point at which to switch between reorganizing and rebuilding.
IF @frag < 30.0
SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REORGANIZE';
IF @frag >= 30.0
SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD';
IF @partitioncount > 1
SET @command = @command + N' PARTITION=' + CAST(@partitionnum AS nvarchar(10));
EXEC (@command);
PRINT N'Executed: ' + @command;
END;

-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;

-- Drop the temporary table.
DROP TABLE #work_to_do;
GO


And for a quick look-see at the state of the indexes in a database:

SELECT object_id AS ObjectID,
index_id AS IndexID,
avg_fragmentation_in_percent AS PercentFragment,
fragment_count AS TotalFrags,
avg_fragment_size_in_pages AS PagesPerFrag,
page_count AS NumPages
FROM sys.dm_db_index_physical_stats(DB_ID('Adventureworks'),
NULL, NULL, NULL , 'DETAILED')
WHERE avg_fragmentation_in_percent > 0
ORDER BY ObjectID, IndexID

Wednesday 5 August 2009

Stupid DataGridViews do not commit data to datasource when asked

I'm hoping I'm missing something really obvious here. The scenario is this:

1. You have a data grid view linked to a binding source linked to a dataset.
2. If you add a new row, move to another row, and then delete the row you added, all is good.
3. If you add a new row, save it (using dgv.CommitEdit), then when you delete it you will get an error along the lines of "the index does not exist".

This stumped me for quite a while. It turns out you must call BindingSource.EndEdit: the DataGridView's CommitEdit only pushes the change as far as the binding source, and dgv.EndEdit only commits the current cell. As far as I can see there are no methods on the DataGridView to do this pass-through. Even more annoyingly, you delete the row from the dataset, not from the binding source or from the grid.

It's little things like this that take far too long to work out.

Thursday 30 July 2009

Bootstrapping .NET 3.5 SP1 with Visual Studio 2008

This is actually fairly easy, EXCEPT for a nasty gotcha in the MS help documentation.

The instructions are here. In summary:

1. Download dotnetfx35 from here

2. Run dotnetfx35.exe /x

3. Copy files to C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages\DotNetFX35SP1

4. Edit the product.xml as per the instructions:

1. Open the [Program Files]\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages\DotNetFx35SP1 folder (or %ProgramFiles(x86)%\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages\DotNetFx35SP1 on x64 operating systems).
2. Edit the Product.xml file in Notepad.
3. Paste the following into the <PackageFiles> element:



4. Find the element for <PackageFile Name="dotNetFX30\XPSEPSC-x86-en-US.exe" and change the PublicKey value to: 3082010A0282010100A2DB0A8DCFC2C1499BCDAA3A34AD23596BDB6CBE2122B794C8EAAEBFC6D526C232118BBCDA5D2CFB36561E152BAE8F0DDD14A36E284C7F163F41AC8D40B146880DD98194AD9706D05744765CEAF1FC0EE27F74A333CB74E5EFE361A17E03B745FFD53E12D5B0CA5E0DD07BF2B7130DFC606A2885758CB7ADBC85E817B490BEF516B6625DED11DF3AEE215B8BAF8073C345E3958977609BE7AD77C1378D33142F13DB62C9AE1AA94F9867ADD420393071E08D6746E2C61CF40D5074412FE805246A216B49B092C4B239C742A56D5C184AAB8FD78E833E780A47D8A4B28423C3E2F27B66B14A74BD26414B9C6114604E30C882F3D00B707CEE554D77D2085576810203010001
5. Find the element for <PackageFile Name="dotNetFX30\XPSEPSC-amd64-en-US.exe" and change the PublicKey value to the same as in step 4 above.
6. Save the product.xml file.


This is where the gotcha is. In the original instructions there is a space after the opening angle bracket, before each PackageFile keyword, which makes the XML malformed. VS 2008 will then not recognise the product correctly. Just make sure that space is not there and it will all work.

Wednesday 29 July 2009

Using SQL Server Application Roles through C#

I've not had to do this with C# before (I did it with Delphi with no problem), and found that there is an issue when using connection pooling.

The problem is that the connection appears to be in error once a role has been set. The test I performed was as follows:

1. Set the approle.
2. Call a stored procedure to populate a DataTable from C#.
3. Call the stored procedure again.

On the second call to the stored procedure, I got an error whatever I did. If you do not set the approle, you get a permission denied error. If you do set the approle, you get a nasty-looking error telling you the connection needs to be reset. Either way doesn't work.

As I am reluctant to use a dedicated connection (for a start this would involve rewriting a lot of code, not to mention any performance issues), I needed a proper solution. And here it is:

1. Open connection.
2. exec sp_setapprole
3. run sql
4. exec sp_unsetapprole
5. close connection.

The only tricky bit is that you have to get the @cookie from sp_setapprole and pass it into sp_unsetapprole to free it up. This is a varbinary, so on the C# side it needs to be handled as a byte[].
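In plain T-SQL the round trip looks like this (role name and password are placeholders):

DECLARE @cookie varbinary(8000);
EXEC sp_setapprole @rolename = 'myRole', @password = 'myPassword',
@fCreateCookie = true, @cookie = @cookie OUTPUT;
-- ... run the SQL that needs the approle permissions ...
EXEC sp_unsetapprole @cookie;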

So, pseudo-ish code is

public DataSet getSomeData()
{
    connection = getConnection();
    connection.Open();
    byte[] cookie = SetApprole(connection, approle, approlepassword);
    ....
    call some sql using connection
    ....

    UnsetApprole(connection, cookie);
}

public byte[] SetApprole(SqlConnection connection, string approle, string approlePassword)
{
    StringBuilder sqlText = new StringBuilder();

    // Note: the role name and password are concatenated straight into the batch,
    // so treat them as trusted input and beware of embedded quotes.
    sqlText.Append("DECLARE @cookie varbinary(8000);");
    sqlText.Append("exec sp_setapprole @rolename = '" + approle + "', @password = '" + approlePassword + "'");
    sqlText.Append(",@fCreateCookie = true, @cookie = @cookie OUTPUT;");
    sqlText.Append(" SELECT @cookie");

    if (connection.State.Equals(ConnectionState.Closed))
        connection.Open();

    using (SqlCommand cmd = connection.CreateCommand())
    {
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = sqlText.ToString();
        cmd.CommandTimeout = commandTimeout;
        return (byte[])cmd.ExecuteScalar();
    }
}

And the final trick - calling sp_unsetapprole. This gave me issues, as it errored with a 15422 (Application roles can only be activated at the ad hoc level) when executed as CommandType.Text. It has to be CommandType.StoredProcedure to work:

public void UnsetApprole(SqlConnection connection, byte[] approleCookie)
{
    if (connection.State.Equals(ConnectionState.Closed))
        connection.Open();

    using (SqlCommand cmd = connection.CreateCommand())
    {
        // Must be CommandType.StoredProcedure - a text batch raises error 15422.
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandText = "sp_unsetapprole";
        cmd.Parameters.AddWithValue("@cookie", approleCookie);

        cmd.CommandTimeout = commandTimeout;
        cmd.ExecuteNonQuery();
    }
}

Remember to add all your favourite try...catch blocks etc.

Tuesday 21 July 2009

Deadlock resolution

An excellent post here - A Deadlock Resolution - on how to track, trace and solve a deadlock issue.

SQL Memory Usage - Further Research into the Procedure Cache

Following on from the last post on SQL Server Memory usage, I was particularly interested in the large use of the procedure cache by ad-hoc queries. In the example I have here, there is a large number of ad-hoc queries, and the cache is constantly growing.

This is consistent behaviour: whenever a statement is executed, SQL Server searches the cache to see if the statement has been used before. As this is an ad-hoc statement, it is unlikely to find one, so it creates a new plan and adds it to the cache. These plans are not aged out all that quickly, so the effect is that the cache keeps growing, which in turn slows down the process of locating a plan. SQL Server may then (if there is demand on memory) flush pages from the data cache, which will also affect performance.

The preferred solution is to use parameterised stored procedures instead of ad-hoc queries; however, if this is not possible (amending the application may be beyond your control), then flushing the procedure cache is the next best thing.

Ideally, we do not want to flush the whole cache, as genuinely reusable plans are a good thing. Clearing just the ad-hoc plans is therefore preferred. This is not obvious from BOL; however, I stumbled on the following statement during a Google trawl:

DBCC FREESYSTEMCACHE('SQL Plans')

This frees the ad-hoc (and prepared) plans while leaving stored procedure plans intact - perfect!
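To gauge how much of the cache the ad-hoc plans are actually occupying before flushing, something along these lines can be used:

SELECT objtype, COUNT(*) AS plans,
SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC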

It goes without saying that this needs to be tested, with proper performance figures to back it up. If you are short of memory, and/or are witnessing a slowdown over time in general SQL activity, then this may be one of the solutions.

Wednesday 15 July 2009

SQL Memory Usage analysis

Analysing whether there is enough memory on a SQL Server is no easy task. Looking at Task Manager is of no real use; the only way to get meaningful figures is to use Performance Monitor. Here, knowing which metrics to record and how to interpret them is the complexity. This also needs to be combined with information provided by SQL Server itself.

This is not a comprehensive guide on what to do - see the websites below for more detailed information. This post is a starting point, to hopefully get some figures up and running without too much investigation.

The main metrics to record are:

Memory: Available Bytes, Page Input/Sec, Pages/sec
Paging file: % Usage
Process: Page File Bytes Peak, Private Bytes, Working Set
SQLServer Memory Manager: Total Server Memory(KB)

Using these metrics, a suitable page file size can be calculated as Page File Bytes Peak * 0.7.
Page file usage can then be compared to the following thresholds:

Memory\Available Bytes          No less than 4 MB
Memory\Pages Input/sec          No more than 10 pages
Paging File\% Usage             No more than 70 percent
Paging File\% Usage Peak        No more than 70 percent

In addition, the working set can be compared to the maximum available bytes, and in turn the actual physical RAM on the system. If the physical RAM (after other processes are taken into account) is exceeded, then there is not enough memory.

If the working set is less than Total Server Memory, then this is evidence that it is being trimmed.

To see if the swap file is being used, the following metrics need to be recorded:

Memory – Available Mbytes
Memory – Committed Bytes
Memory – Pages Output/Sec
Paging - %Usage
Working Set – Sqlservr

Of real interest: if Pages Output/Sec is not 0, then the swap file is being used.

To analyse what the memory usage is, DBCC MEMORYSTATUS can help identify how efficiently the procedure cache is used. A large procedure cache may have an impact on SQL performance, and may result in less data being kept in memory, forcing SQL to go to disk.
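SQL Server's own view of its memory can also be queried directly, which saves setting up a Performance log just for these counters:

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)')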

Useful sites for further information are:

(for 64bit SQL) - How to reduce paging of buffer pool memory in the 64-bit version of SQL Server
Procedure Cache Tuning/Sizing
RAM, Virtual Memory, Pagefile and all that stuff

Tuesday 23 June 2009

Setting up DB Mail on SQL 2005

The following code will set up all the necessary parts for SQL 2005 DBMail. Note that you will also need to enable Database Mail, either in Surface Area Configuration for Features or via sp_configure as shown below. The variables also need to be set to suit the installation.
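If you would rather not use the GUI, Database Mail can be enabled with sp_configure (it is the same underlying option the Surface Area Configuration tool flips):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;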

USE msdb

GO

DECLARE @ProfileName VARCHAR(255)
DECLARE @AccountName VARCHAR(255)
DECLARE @SMTPAddress VARCHAR(255)
DECLARE @EmailAddress VARCHAR(128)
DECLARE @DisplayUser VARCHAR(128)

SET @ProfileName = 'MyMailProfile';
SET @AccountName = 'MyMailAccount';
SET @SMTPAddress = 'my.smtp.server.address';
SET @EmailAddress = 'myemail@myorg.com';
SET @DisplayUser = 'My Real Name';

EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = @AccountName,
@email_address = @EmailAddress,
@display_name = @DisplayUser,
@mailserver_name = @SMTPAddress

EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = @ProfileName

EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = @ProfileName,
@account_name = @AccountName,
@sequence_number = 1 ;

To send an email:

EXEC msdb.dbo.sp_send_dbmail
@recipients =N'someone@someaddress.com',
@body = 'Test Email Body',
@subject = 'Test Email Subject',
@profile_name = 'MyMailProfile'

Thursday 11 June 2009

Search Cache Plans Stored Procedure

Excellent article in SQL Server Central which inspects cached plans looking for poor performance. Full credit to Ian Stirk (Ian_Stirk@yahoo.com) for a great, easy-to-use and very useful routine.

Code is as follows:


CREATE PROC [dbo].[dba_SearchCachedPlans]
@StringToSearchFor VARCHAR(255)
AS
/*----------------------------------------------------------------------
Purpose: Inspects cached plans for a given string.
------------------------------------------------------------------------

Parameters: @StringToSearchFor - string to search for e.g. '%missingindexes%'.

Revision History:
03/06/2008 Ian_Stirk@yahoo.com Initial version

Example Usage:
1. exec dbo.dba_SearchCachedPlans '%missingindexes%'
2. exec dbo.dba_SearchCachedPlans '%columnswithnostatistics%'
3. exec dbo.dba_SearchCachedPlans '%tablescan%'
4. exec dbo.dba_SearchCachedPlans '%CREATE PROC%MessageWrite%'

-----------------------------------------------------------------------*/
BEGIN
-- Do not lock anything, and do not get held up by any locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 20
st.text AS [SQL]
, cp.cacheobjtype
, cp.objtype
, DB_NAME(st.dbid) AS [DatabaseName]
, cp.usecounts AS [Plan usage]
, qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE @StringToSearchFor
ORDER BY cp.usecounts DESC
END

Tuesday 26 May 2009

VS 2008 Excel Addin tab issue (ribbon designer) - incorrectly merging custom tab with 3rd party tab

The ribbon designer is a great tool, however it seems to have a serious flaw. The issue is that if you have already got a custom tab installed (e.g. the Team tab from Team Foundation Server), then any tab created using the designer is merged with the existing custom tab. Annoying to say the least.

The issue seems to be with the way that tab is identified to Office. The property group that defines this is ControlId, which is a little confusing. This property group contains two properties, ControlId and OfficeId.

ControlId is a drop-down with two options: Office or Custom. This is where the confusion begins. As this is an Office addin, one would assume that the ControlId should be set to Office and the OfficeId set to a distinct name. However, if you change the OfficeId from TabAddIns, your custom ribbon will not appear. And if you leave it as TabAddIns, then your custom ribbon will be merged with any other custom addin that you happen to have installed, e.g. the Team Foundation "Team" menu.

In fact you must set ControlId to Custom. This in turn alters OfficeId to CustomId. You then enter your distinct name in the (Name) property. Yep, this is a little strange! This automatically sets the CustomId, and therefore the tab's Id within Excel. And when it is installed, it will not merge with any existing tabs.

I think the reason for this can be explained if you export the ribbon to XML. You will see that the tab has an IdMso property. It seems that this can only ever be set to TabAddIns; any other setting does not work. However, if you have set ControlId to Custom, then in the XML the tab instead has an Id property, and this works whatever the Id is. This matches how the XML used to look under VS 2005.

I'll put this down as a strange and confusing "feature" ;)

Friday 22 May 2009

Deploying Office 2007 Addin with VS 2008

Following on from my post last year about deploying from VS 2005, I have since upgraded the product to VS 2008 (a nicer development environment for Office addins, for a start!). Unfortunately, the installation project I created for VS 2005 did not work after it was converted to 2008. As per usual, the error reported from Excel 2007 is best described as minimal, so I decided to start this all again from scratch.

The MSDN pages have been updated for .NET 3.5 and can be found here.

I am currently working my way through this. First stop is that I have had to recreate the addin project itself from scratch, as this did not convert at all. I am also changing the name of the assembly, just in case some weird caspol issue is getting in the way.

Next stop is creating a new installation routine.

Following the instructions in the MSDN link above produced a perfectly working installation.

NOTE
There are considerable changes between the VS2005 and VS2008 installation projects for Office addins - guess I should have known! (It's not any easier, though.)

Thursday 14 May 2009

Bootstrapping GAC in ClickOnce Visual Studio 2005

One of the main shortcomings of ClickOnce is the inability to install GAC-related items, and to execute additional installation requirements such as creating registry settings.
To accomplish this you must create a custom bootstrap package, which is installed as part of the prerequisites. This is easier than it sounds. In essence:

Create a project which includes all the references required.
Create an installation project that loads the references into the GAC (and does other pre-requisite actions)
Copy the msi to the boot strapper folder for Visual Studio (e.g. C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\BootStrapper\Packages or C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages)
Create a product.xml to identify the msi.

*Update - make sure you create a subfolder to hold the language-specific EULA. Without this it will break!

Alter the Prerequisites in the ClickOnce publish tab to include this new project.

There are some nice articles on this, not least some example code from dasBlonde.

Also :

Installsite bunch of links
MSDN bootstrap manifest generator

Thursday 7 May 2009

CLR to create a directory / folder

The reason for this is the ridiculous security settings required to use xp_cmdshell - specifically, having to use sp_xp_cmdshell_proxy_account. The issue is that this requires a domain login and password, which is problematic if your account passwords change regularly.

The way around this is to use a CLR stored procedure to create the folder. The code for the CLR (assuming you know how to create a CLR project) is as follows:


public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void cproc_createfolders(string folderName)
    {
        // Check whether the folder already exists
        if (!System.IO.Directory.Exists(folderName))
        {
            System.IO.Directory.CreateDirectory(folderName);
        }
    }
};


Simple enough. Installing it isn't. As this accesses resources external to SQL Server, you need to set up the necessary permissions. There are two ways of doing this: via a login, or by setting the database to be trustworthy. I used the latter, as this is a database behind a firewall, for internal use only, so I felt it was OK. If you do not do this, you will get an error along these lines:


Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.



So:

ALTER DATABASE MyDatabase SET TRUSTWORTHY ON

Next is to install the assembly. This needs to specify that it is for External Access:


CREATE ASSEMBLY CreateDIR
FROM 'C:\CLR\myCLR.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
GO


And finally a stored procedure to allow access from SQL :


CREATE PROCEDURE dbo.cproc_createfolder
(
@strPath Nvarchar(1024)
)
AS EXTERNAL NAME CreateDIR.StoredProcedures.cproc_createfolders



Access to this routine is via dbo.cproc_createfolder.
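For example (the path is a placeholder):

EXEC dbo.cproc_createfolder @strPath = N'C:\SomeNewFolder'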

Took a lot of searching to find this, so I think it is only fair to credit Vadivel's Blog.

Monday 23 March 2009

Save/default parameters for SSRS

The following link is a rather useful way to save parameters and recall them per user in SSRS.

Handling multi value parameters from SSRS in a stored procedure

Actually quite easy, using this function:

CREATE FUNCTION [dbo].[fn_SplitIN]
/* This function is used to split up multi-value parameters */
(
@ItemList VARCHAR(4000),
@delimiter CHAR(1)
)
/* ========================================================================
fn_SplitIN

Function to return a table for joining for a multi valued parameter from SSRS

How to use :

SELECT cols
FROM mytab
WHERE cols IN (SELECT Item FROM fn_SplitIN(@multiValueParam,','))

===========================================================================
Version Date Author Comment
------- ---- ------ -------
v0.00 23/03/2009 Brian Jones Initial Version
=========================================================================== */

RETURNS @IDTable TABLE (Item VARCHAR(50))
AS
BEGIN
DECLARE @tempItemList VARCHAR(4000)

SET @tempItemList = @ItemList

DECLARE @i INT
DECLARE @Item NVARCHAR(4000)

SET @tempItemList = REPLACE (@tempItemList, @delimiter + ' ',@delimiter)
SET @i = CHARINDEX(@delimiter, @tempItemList)

WHILE (LEN(@tempItemList) > 0)
BEGIN
IF @i = 0
SET @Item = @tempItemList
ELSE
SET @Item = LEFT(@tempItemList, @i - 1)

INSERT INTO @IDTable(Item) VALUES(@Item)

IF @i = 0
SET @tempItemList = ''
ELSE
SET @tempItemList = RIGHT(@tempItemList, LEN(@tempItemList) - @i)

SET @i = CHARINDEX(@delimiter, @tempItemList)
END

RETURN
END

Friday 20 March 2009

Conditional Where Clauses

Excellent article by Erland Sommarskog about the various ways of doing conditional WHERE clauses in SQL. What sets it apart is that Erland includes comparative performance figures, showing which way is best. It is also kept up to date.

Tuesday 27 January 2009

Change server collation

Changing the SQL 2005 server collation does not require a total re-install. Instead, the following can be done:

1. Backup all user databases
2. Drop the user databases
3. Make sure SQL Server (mixed mode) authentication is enabled
4. Rebuild the master database as follows:

cd "\Program Files\Microsoft SQL Server\90\Setup Bootstrap"

setup.exe /q /ACTION=RebuildDatabase /INSTANCENAME=MSSQLSERVER /SAPWD="sa-pwd" /SQLSYSADMINACCOUNTS="BUILTIN\ADMINISTRATORS" /SqlCollation=Latin1_General_CI_AS

More details are here. Remember to restore the user databases afterwards - note that a restored database keeps its original collation, so only newly created databases pick up the new server default.

Wednesday 7 January 2009

TFS Sharing workspace between multiple users

The requirement is for a build machine where several developers (who develop with TFS on their own PCs) want to build and test on the final hardware. IT would not allow a single AD login, and TFS will not allow a workspace to be used by different users on the same PC.

However, this can be done by fooling TFS into thinking that the physical workspace is a different path per user. By using the DOS command SUBST to assign each user a different drive letter pointing at the same path, TFS will allow the files to be shared - see the sketch below.
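For example (drive letters and path are placeholders), each user maps their own drive letter to the shared folder and creates their workspace against that drive:

rem user one
subst T: C:\Builds\SharedWorkspace

rem user two
subst U: C:\Builds\SharedWorkspace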

This is OK for this scenario, where only Get Latest was ever used - I'm not sure if it would be any good if files are checked out.