
Friday, March 30, 2012

Losing Temp Tables Between Tasks

I have a control flow setup with 5 tasks.

1. Create Temp Table (Execute SQL)
2. Load Temp Table (Execute SQL)
3. Data Flow Task using the temp table
4. Data Flow Task using the temp table
5. Drop Temp Table (Execute SQL)

Tasks 3 & 4 are currently set up to run in parallel/concurrently - meaning they both branch directly off of task #2. They both also lead to task #5. Also, my connection manager is set to retain its connection.

Here's the problem. When executing this flow, task #3 will sometimes fail and other times succeed, without me changing anything between runs (I simply end debugging and run it again). When it does fail, it is always because it cannot find the temp table. One thing to note is that task #4 (running at the same time as #3 and using the same temp table) always finishes first because it does quite a bit less work. Last thing to note: if I set up task #4 to run after #3 it works every time, but of course the total package takes much longer to execute.

How is it that the task can find the temp table on some runs, but not on others? If this is just quirky behavior that I am going to have to live with for the time being, is there a way to force a task to re-run X number of times before giving up and moving on?

Thanks,
David Martin

Temp tables are only guaranteed to remain in existence as long as they are referenced. Is it possible that the connection is being dropped and reopened by SSIS for some reason between tasks?

Or that there is some time during the processing where SQL Server cannot detect that the temp table is still being referenced, so it drops it even if the connection is still open?

Seems like using temp tables is risky if you don't have full and explicit control over their existence. I want to use them also but not if I cannot guarantee they will be there when I need them!

|||

Make sure the "RetainSameConnection" property is set to "True" for the connection. This will allow the Execute SQL tasks to share the connection so the temp table can be accessed.

Frank
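|||

If RetainSameConnection alone doesn't cure the parallel failures, another angle (a sketch only; the table name and columns here are made up) is a global temp table, whose visibility does not depend on every task sharing the exact same connection:

IF OBJECT_ID('tempdb..##LoadStage') IS NOT NULL
    DROP TABLE ##LoadStage;
CREATE TABLE ##LoadStage (ID int, SomeValue varchar(50));

A global (##) temp table remains until the creating connection closes and all other references to it end, so both parallel data flows can see it. You will likely also need DelayValidation = True on the tasks that use it, since the table won't exist when the package is validated.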


Losing config file between child packages and *$##@!- Passwords!

I have a parent package that uses the Execute Package Task against three child packages. They are all set up to use the same configuration file, and the packages are all located in the same folder. The purpose of the configuration is to use the same connection strings for each child package.

I removed the path from the config file specification to facilitate deployment. During debugging, the child packages cannot find the config file when they are run via the parent unless the full path is specified. If run standalone, a child package can find the config file without the path. As I said, all dtsx files and the config file are in the same folder.

Also, the password-removal issue is killing me in development. Every time I touch the config file, it removes the password from the connection string. I have "protection level" set to "don't save sensitive" - I'd appreciate help with this too.

thanks

Mark Challen wrote:

I have a parent package that uses the Execute Package Task against three child packages. They are all set up to use the same configuration file, and the packages are all located in the same folder. The purpose of the configuration is to use the same connection strings for each child package.

I removed the path from the config file specification to facilitate deployment. During debugging, the child packages cannot find the config file when they are run via the parent unless the full path is specified. If run standalone, a child package can find the config file without the path. As I said, all dtsx files and the config file are in the same folder.

Also, the password-removal issue is killing me in development. Every time I touch the config file, it removes the password from the connection string. I have "protection level" set to "don't save sensitive" - I'd appreciate help with this too.

thanks

When you have it set to "don't save sensitive" that's what's supposed to happen. It's not saving sensitive information.

|||

As far as your paths go, you have to use absolute paths. So, you might be better off using an environment variable or something to store the path to the config file.

Relative paths have been requested: https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=253232|||

Phil Brammer wrote:


When you have it set to "don't save sensitive" that's what's supposed to happen. It's not saving sensitive information.

Do you happen to know which setting will allow the saving of a password so I can work on the packages in development without having to reset the password every time I touch the configuration? I can understand removing the password when building the solution for deployment, but having it auto-magically disappear while I'm still in the designer makes no sense.

|||http://msdn2.microsoft.com/en-us/library/ms141747.aspx|||

Mark Challen wrote:

I removed the path from the config. file specification to facilitate deployment. During debugging, the child packages cannot find the config file when they are run via the parent unless the full path is specified. If run standalone, a child package can find the config. file without the path. As I said, all dtsx filaes and the config file is in the same folder.

In my current project I am using that technique (removing the path of the config file from the package configuration) and it has worked fine both ways, whether called from the parent package or standalone....

The config file has to be in the same folder as the packages, though.

|||

Thanks for the link Phil. Doesn't answer the question, and I'm getting increasingly frustrated by the lack of control with SSIS. Not to get off on a rant, but I personally would rather be responsible for setting security, not having MS dictate that I encrypt everything.

Anyhow, I have found a workaround - once the config file is established on the parent I set its "Read-only" attribute to true in the file system. A low-tech approach, but it seems to work.

The issue of the requirement for an absolute path for the config file is going to be a showstopper for me (no way am I setting an environment variable; I don't see the thinking behind that - how do I establish the variable in the first place?). I will probably execute my child packages independently from the command line from my own custom wrapper code, passing in my connection strings on the command line. Too bad, I wanted to use the Configuration feature and I spent a lot of time trying to make it work. This seems to be endemic to SSIS (like checkpointing) - a cool feature at first glance, but try to use it and you hit a brick wall. </Rant>

Rafael Salas wrote:

In my current project I am using that technique (removing the path of the config file from the package configuration) and it has worked fine both ways, whether called from the parent package or standalone....

The config file has to be in the same folder as the packages, though.

Are you sure the package is successfully locating the config file and overriding the connection strings, or is it defaulting back to your original connection manager settings in the child?

|||

Mark Challen wrote:

The issue of the requirement for an absolute path for the config. file is going to be a showstopper for me (no way am I setting an environment variable; I don't see the thinking behind that - how do I establish the variable in the first place?).

You set up the environment variable by way of the system properties in Windows. Once it's set up, you simply reference it, thereby making your configurations portable. This is a good way, in my opinion, to handle changes across environments that have different configurations (file paths, etc...)

For me, I use SQL Server based configurations and environment variables. I've pretty much got everything set up so that I never have to make changes to a package when promoting it from development into QA and finally into production. Even though file paths may change (commonly C: drive to D: drive, perhaps), the packages continue to work with little interaction. This helps given that we can't promote to production -- another group does that. (Thank you Sarbanes-Oxley!)|||

Mark Challen wrote:

Are you sure the package is successfully locating the config file and overriding the connection strings, or is it defaulting back to your original connection manager settings in the child?

I am 100% sure that it works; actually, it is very easy to see when it doesn't: each time a configuration does not take place, a warning is generated and can be seen in the progress tab of BIDS, or in the logging table/file when logging is enabled. I actually prefer using indirect configurations, but I have a client where environment variables are not an option....

Monday, March 26, 2012

Looping through list in a query

Hello,
I am trying to write a query that will execute the same command against each
database in a list. More specifically, I use the following command to get a
list of all the user-defined databases on the server:
SELECT catalog_name from information_schema.Schemata
WHERE NOT (CATALOG_NAME in
('tempdb','master','msdb','model','Northwind','pubs'))
and now I want to perform that same action (i.e. detach or check for orphan
users, etc.) on each DB that I get from this query. Is there a way to do this
in SQL?
thanks
christos
Christos Kritikos wrote:
> Hello,
> I am trying to write a query that will execute the same command
> against each database in a list. More specifically, I use the following
> command to get a list of all the user-defined databases on the server:
> SELECT catalog_name from information_schema.Schemata
> WHERE NOT (CATALOG_NAME in
> ('tempdb','master','msdb','model','Northwind','pubs'))
> and now I want to perform that same action (i.e. detach or check for
> orphan users, etc.) on each DB that I get from this query. Is there a
> way to do this in SQL?
> thanks
> christos
Checking user information can be done in sysprocesses. No need to
enumerate the databases. If you really need a way to run the same
command against each database, you can use sp_MSforeachdb or just use a
temp table and iterate through the results.
David Gugick
Imceda Software
www.imceda.com
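
The sp_MSforeachdb route can be sketched like this (a rough sketch; sp_MSforeachdb is undocumented, and the '?' placeholder is replaced with each database name as it iterates). Here it runs an orphaned-user check in every user database:

EXEC sp_MSforeachdb '
USE [?];
IF DB_NAME() NOT IN (''master'', ''model'', ''msdb'', ''tempdb'', ''Northwind'', ''pubs'')
    EXEC sp_change_users_login ''Report'';  -- lists orphaned users in this DB
';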

Friday, March 23, 2012

loop through tables for UNION ALL

Hey folks,
I have a variable number of identically structured & similarly named
tables for which I would like to expeditiously execute a UNION ALL in
succession. e.g. "SELECT * from (x-2) UNION ALL SELECT * FROM (x-1) UNION
ALL SELECT * FROM x UNION ALL" etc... where (x-2), (x-1), x are three such
tables. I can pull all the tables I need to create this very large UNION ALL
query from INFORMATION_SCHEMA.TABLES, so is there any way to somehow
construct a looped stored procedure such that I don't have to manually type
in the full UNION ALL query?
Thanks,
Dave
--
______________________________
Remove "_SPAM" to reply directly.

David Shorthouse wrote:
> Hey folks,
> I have a variable number of identically structured & similarly named
> tables
Why?

> for which I would like to expeditiously execute a UNION ALL in
> succession. e.g. "SELECT * from (x-2) UNION ALL SELECT * FROM (x-1) UNION
> ALL SELECT * FROM x UNION ALL" etc... where (x-2), (x-1), x are three such
> tables. I can pull all the tables I need to create this very large UNION ALL
> query from INFORMATION_SCHEMA.TABLES, so is there any way to somehow
> construct a looped stored procedure such that I don't have to manually type
> in the full UNION ALL query?
>
Create a UNION by querying the INFORMATION_SCHEMA and then cut and
paste the result. Put the UNION in a view and query that in your proc.
Take a look at the partitioned views topic in Books Online.
Better still, consider combining this into a single table unless it's a
case of "I'm not allowed to fix my database".
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--

|||

I'll take it as read that performance is not a problem here, as if it were
the driving factor a DB redesign would not be out of the question.
Instead of trying to perform a union all, maybe a different approach would be
better...
1. Create a temporary table with the structure of your output.
2. create a cursor which will loop through the tablenames that you need to
query.
3. use the table name to create a dynamic sql statement (see sp_executesql)
that will populate the temporary table with the data that you need to
select.
4. after the loop, perform a single select on the temp table.
Be warned, the performance of this will be extremely bad.
Regards
Colin Dawson
www.cjdawson.com
"David Shorthouse" <davidshorthouse@.shaw_SPAM.ca> wrote in message
news:uX5XwbqPGHA.4900@.TK2MSFTNGP09.phx.gbl...
> Hey folks,
> I have a variable number of identically structured & similarly named
> tables for which I would like to expeditiously execute a UNION ALL in
> succession. e.g. "SELECT * from (x-2) UNION ALL SELECT * FROM (x-1) UNION
> ALL SELECT * FROM x UNION ALL" etc... where (x-2), (x-1), x are three such
> tables. I can pull all the tables I need to create this very large UNION
> ALL query from INFORMATION_SCHEMA.TABLES, so is there any way to somehow
> construct a looped stored procedure such that I don't have to manually
> type in the full UNION ALL query?
> Thanks,
> Dave
> --
> ______________________________
> Remove "_SPAM" to reply directly.
>|||
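Colin's four steps above can be sketched like this (a sketch only; the temp table's columns and the table-name filter are assumptions you would replace with the real schema):

CREATE TABLE #Combined (Col1 int, Col2 varchar(50));  -- assumed output structure

DECLARE @tbl sysname, @sql nvarchar(4000);

DECLARE tbl_cur CURSOR FOR
    SELECT TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'
      AND TABLE_NAME LIKE 'Client%';  -- assumed naming pattern for the member tables

OPEN tbl_cur;
FETCH NEXT FROM tbl_cur INTO @tbl;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- step 3: dynamic INSERT per table
    SET @sql = N'INSERT INTO #Combined SELECT * FROM ' + QUOTENAME(@tbl);
    EXEC sp_executesql @sql;
    FETCH NEXT FROM tbl_cur INTO @tbl;
END
CLOSE tbl_cur;
DEALLOCATE tbl_cur;

-- step 4: single select over the combined data
SELECT * FROM #Combined;

|||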
> Why?
This is simply the way the db has been designed. Individual clients access
individual tables.

> Create a UNION by querying the INFORMATION_SCHEMA and then cut and
> paste the result. Put the UNION in a view and query that in your proc.
> Take a look at the partitioned views topic in Books Online.
> Better still, consider combining this into a single table unless it's a
> case of "I'm not allowed to fix my database".
This is what I am attempting to do; scheduling a table creation by creating
a UNION ALL of these individual tables. Rather than having to manually add
these tables to the UNION ALL query prior to the creation of the one large
result, I am attempting to programmatically create this UNION ALL because
there are a variable number of individual tables. So, I'm not sure how the
idea you have above will accomplish this.
Dave|||
> Instead of trying to perform a union all, maybe a different approach would be
> better...
> 1. Create a temporary table with the structure of your output.
> 2. create a cursor which will loop through the tablenames that you need to
> query.
> 3. use the table name to create a dynamic sql statement (see
> sp_executesql) that will populate the temporary table with the data that
> you need to select.
> 4. after the loop, perform a single select on the temp table.
> Be warned, the performance of this will be extremely bad.
Colin,
Thanks for the ideas. Performance is not an issue here because I am indeed
creating a table from this large UNION ALL query result.

|||

>> This is simply the way the db has been designed. Individual clients access individual tables. <<
Well, kill the moron who did this and fix it!! The name of this design
flaw is "attribute splitting" and it can take several forms.
1) Putting the same set of entities into many tables. There are two
sub-cases.
a.The absolute newbie usually mimics sequential numbered or
temporally labeled tapes in each table.
b. The newbie with a few weeks' experience splits the table on the
values of one or more attributes. For example, taking a Personnel
table and making it into "MalePersonnel" and "FemalePersonnel" tables.
2) Taking a single attribute and splitting in into multiple columns.
There are two sub-cases.
a. The multiple columns are in the same table. The most common
example is having a group of bit flag columns instead of a single
well-designed encoding in a column. They have to be combined to get the
attribute's value and you need multi-column CHECK() constraints to
control the combinations.
b. The split column is in multiple tables. The most common example
is trying to force an OO design in an RDBMS, so that joins are required
to assemble a meaningful entity. They have to be combined via a join
to get the attribute's value.

|||

As David has suggested, look at partitioned views. You can create a
single entity (a view) that contains all of the data from your tables.
Through the use of check constraints, the optimizer can do partition
exclusion, which means it will only select data from the appropriate
underlying tables. The VIEW is also able to be UPDATED, so you end up
dealing with a lot less mess than trying to dynamically select the
appropriate tables during your selection time.
Of course, the only real reason for using a partitioned view is to help
performance when dealing with large datasets; in your case, it sounds
like you're having to do an end-run around a design choice that should
be reconsidered.
Stu
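|||

To make the partitioned-view suggestion concrete, here is a minimal sketch (table and column names are invented; the CHECK constraints are what allow the optimizer to exclude partitions):

CREATE TABLE Client1Sales (ClientID int NOT NULL CHECK (ClientID = 1), Amount money);
CREATE TABLE Client2Sales (ClientID int NOT NULL CHECK (ClientID = 2), Amount money);
GO
CREATE VIEW AllSales AS
    SELECT ClientID, Amount FROM Client1Sales
    UNION ALL
    SELECT ClientID, Amount FROM Client2Sales;
GO
-- The optimizer can satisfy this from Client2Sales alone, thanks to the CHECK constraints:
SELECT SUM(Amount) FROM AllSales WHERE ClientID = 2;

Note that for the view itself to be updatable, the partitioning column must also be part of each member table's primary key.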

Wednesday, March 21, 2012

Loop through all User Tables

Hi,
I would like to execute a sql statement on all user tables of my db. Do you
know how to script that this statement loops through all user tables?
Thanks in advance
Graham Smith

Maybe this *undocumented* proc will help:
EXEC sp_MSforeachtable 'SELECT TOP 1 * FROM ?'
Roji. P. Thomas
Net Asset Management
http://toponewithties.blogspot.com
"Graham Smith" <graham.smith@.bbank.com> wrote in message
news:uEp8hj33FHA.476@.TK2MSFTNGP15.phx.gbl...
> Hi,
> I would like to execute a sql statement on all user tables of my db. Do
> you know how to script that this statement loops through all user tables?
> Thanks in advance
> Graham Smith
>

|||

You can generate one this way:
SELECT 'SELECT TOP 1 * FROM ['
+ TABLE_SCHEMA + '].['
+ TABLE_NAME + ']'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'base table'
AND OBJECTPROPERTY(OBJECT_ID('['
+ TABLE_SCHEMA + '].['
+ TABLE_NAME + ']'), 'IsMsShipped') = 0
"Graham Smith" <graham.smith@.bbank.com> wrote in message
news:uEp8hj33FHA.476@.TK2MSFTNGP15.phx.gbl...
> Hi,
> I would like to execute a sql statement on all user tables of my db. Do
> you know how to script that this statement loops through all user tables?
> Thanks in advance
> Graham Smith
>

Wednesday, March 7, 2012

Looking For Technique To Default Execute Permission on any new SPr

Hi,
I'm looking for a way, for when any procedure is created or updated in a
user database, that a particular DB role be granted execute permissions on
the proc.
This role is a: datareader,datawriter, and ddladmin, but can not be
db_owner. I know that I can grant execute en-masse using a variety of T-SQL
techniques, but I would have to do this periodically. Also, adding a GRANT
on the tail-end of the CREATE PROCEDURE would not work for me.
I was beginning to create a trigger on sysobjects for CREATE,UPDATE when the
object type is 'P' (for procedure) to then issue the GRANT, but soon realized
that triggers on system tables are not permitted!
Does anyone have any suggestions?
Thanks very much,
SteveT

If you are on SQL Server 2005, you could grant execute
permissions on the schema to the role -
GRANT EXECUTE ON SCHEMA::YourSchema TO YourRole
Otherwise, not much you can do if you can't add a grant
statement to the create procedure script.
-Sue
On Thu, 2 Nov 2006 13:49:02 -0800, SteveT
<SteveT@.discussions.microsoft.com> wrote:

>Hi,
>I'm looking for a way, for when any procedure is created or updated in a
>user database, that a particular DB role be granted execute permissions on
>the proc.
>This role is a: datareader,datawriter, and ddladmin, but can not be
>db_owner. I know that I can grant execute en-masse using a variety of T-SQL
>techniques, but I would have to do this periodically. Also, adding a GRANT
>on the tail-end of the CREATE PROCEDURE would not work for me.
>I was beginning to create a trigger on sysobjects for CREATE,UPDATE when the
>object type is 'P' (for procedure) to then issue the GRANT, but soon realized
>that triggers on system tables are not permitted!
>Does anyone have any suggestions?
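
A DDL trigger (new in SQL Server 2005) can do what the sysobjects trigger attempt could not: fire on CREATE PROCEDURE at the database level and issue the GRANT automatically. A sketch; the trigger and role names are placeholders:

CREATE TRIGGER trg_GrantExecOnNewProc
ON DATABASE
FOR CREATE_PROCEDURE
AS
BEGIN
    DECLARE @ed xml, @schema sysname, @proc sysname, @sql nvarchar(4000);
    SET @ed = EVENTDATA();  -- XML describing the DDL event that fired the trigger
    SET @schema = @ed.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(128)');
    SET @proc   = @ed.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)');
    SET @sql = N'GRANT EXECUTE ON ' + QUOTENAME(@schema) + N'.' + QUOTENAME(@proc)
             + N' TO YourRole;';  -- YourRole is a placeholder for the actual role
    EXEC sp_executesql @sql;
END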