Friday, March 30, 2012
Loss of Column properties when exporting with DTS
Losing decimal places on Sql Svr to Sql Svr data flow
I have a staging table that has a float [DT_R8] column that shows 4 decimal places. When I use an OLE DB Source referencing that table to go to an OLE DB Destination referencing a table with an identical column the data gets rounded to a single decimal place. Needless to say this is really messing with the values.
I can see the Scale property in the Advanced Editor for OLE DB Destination but I cannot change it. Same for the OLE DB Source.
Oh, and if I do an insert using SQL in Management Studio I have no problem getting the 4 decimal places in. For example:
Insert into table2
(Col1, Col2)
select Col1, Col2 from table1
Moves all the data and keeps the 4 decimal places.
How do I do this without losing those decimals?
Thanks
John
Do you have the SSIS datatypes set to DT_R8? Double click on the green line coming out of the OLE DB source and in the metadata tab, what's the data type set to?
(Just asking the perhaps obvious question for completeness.)|||The datatype for that column is DT_R8 in the OLE DB Source (both External and Output columns), OLE DB Destination (both External and Input columns) and in the green line metadata.
John
|||Works fine for me. Maybe you've got some old metadata stuck somewhere. Try putting a Data Viewer on the pipeline at various points (if there are various points) to see what it looks like in flight. Maybe you need to delete and recreate some components or the whole data flow to get it straightened out. What method/query are you using to pull the data from the source? What type of connections are you using?
|||While recreating the data flow, I checked the view and it appears that the problem is happening just before the package executes.
Thanks for all the help.
Wednesday, March 28, 2012
looping through table to exec an SP many times
Hi,
If I have an SP called mySP that accepts one parameter @param,
and I have a table of parameters with only one column like this:
Param1
Param2
..
ParamN
How do I execute the SP once for each of the table's values?
Something like this:
Exec mySP 'Param1'
Exec mySP 'Param2'
...
Exec mySP 'ParamN'
I want that automatically since the parameters are going to be in a table called myTblParams
Notice that I don't want to pass all the parameters to the SP at once, but only one value each time I execute the SP, since mySP accepts only one parameter.
Thanks a lot for guidelines
If I understand you correctly, you want to execute mySP for every row in the myTblParams table. To do this, you could write another sp that has a cursor to loop through the myTblParams rows.
...declare variables, etc...
Declare crsMyTblParams Cursor For
Select param_field_name
from myTblParams

Open crsMyTblParams
Fetch Next From crsMyTblParams Into @Param
While @@Fetch_Status = 0 Begin
Exec mySP @Param
Fetch Next From crsMyTblParams Into @Param
End
Thanks a lot man. Looks more complicated than I thought. Is that the simplest way to do it?
thanks
|||This is actually very easy and something you do with stored procedures all the time. If you want to iterate through a result and do something for each row in the result, this is the way you do it (using a cursor).
You have another post that mentioned looping through all the databases in an instance, and you would basically do the same thing. Capture the results of the exec sp_databases command (in a cursor) and loop through the results, executing the select * from information_schema query for each database. You can prefix information_schema with the database name (i.e. select * from pubs.information_schema.tables). You can capture the database name from the cursor (results from the sp_databases command) and then replace the database name in the information_schema select statement to find your keywords. In this case, it would be more complicated because you would have nested cursors: one that reads your table (with words to search for) and one to loop through all the databases.
Cursors sound complicated when you first start using them (I know I was clueless), but once you do it a few times, they really are relatively simple and very powerful.
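To make the outer loop concrete, here is a minimal sketch of the "loop through every database" idea described above. It uses a cursor over sys.databases (rather than sp_databases) plus dynamic SQL, and assumes SQL Server 2005+; the keyword 'isvirtual' is just the example from this thread:

```sql
-- Sketch only: visit every online database and search its stored
-- procedure definitions for a keyword.
DECLARE @dbName sysname, @sql nvarchar(max)

DECLARE crsDatabases CURSOR FOR
    SELECT name FROM sys.databases WHERE state = 0   -- online databases only

OPEN crsDatabases
FETCH NEXT FROM crsDatabases INTO @dbName
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Switch context to the target database inside the dynamic batch,
    -- then search procedure definitions there.
    SET @sql = N'USE ' + QUOTENAME(@dbName) + N';
        SELECT DB_NAME() AS DbName, name AS ProcName
        FROM sys.procedures
        WHERE OBJECT_DEFINITION(object_id) LIKE ''%isvirtual%'''
    EXEC (@sql)
    FETCH NEXT FROM crsDatabases INTO @dbName
END
CLOSE crsDatabases
DEALLOCATE crsDatabases
```

The inner per-keyword loop (reading the words-to-search table) would nest inside the BEGIN...END in the same cursor style shown earlier.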
|||
You mean this one:
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1048114&SiteID=1
Cool, you saw that :)
|||Can you explain the logic of your SP? Of course, the cursor logic is complicated, slow and hard to maintain. You can just write one SP that does all the operations in a set-based manner. It is hard to suggest the approach though without knowing more details. Please post some sample DDL, data (how the parameters look etc) and the expected results.|||
I am sure you've already seen this SP. It searches whether a given text passed as a parameter occurs in any of the SP scripts of the current database, and it outputs the occurrence of that searched text and the line number. Here is the SP:
USE [mozaikDB]
GO
/****** Object: StoredProcedure [dbo].[sp_FindText] Script Date: 12/31/2006 16:54:35 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[sp_FindText] @text varchar(8000), @findtype varchar(1)='P', @DBvar varchar(30) AS
SET NOCOUNT ON
IF @findtype='P'
SELECT DISTINCT Len(SubString(text,1, PatIndex('%' + @text + '%', text)))-Len(Replace(SubString(text,1, PatIndex('%' + @text + '%', text)),char(13),''))+1 AS Line,
PatIndex('%' + @text + '%', text) AS Position,
OBJECT_NAME(id) AS ProcName
FROM DBvar.dbo.syscomments
WHERE text like '%' + @text + '%'
ORDER BY ProcName, Line
So this script searches within the SPs of the current database, but I want to search within all the databases of the instance and not only the current one.
Thanks.
|||Your SP logic is slightly flawed. If a SP definition spans multiple entries and a word gets split, then your search will not find it. It is easier to use the OBJECT_DEFINITION function and search the nvarchar(max) value directly. This might however be slower, but better than a temporary table based approach for example (similar to sp_helptext). Lastly, the technique to find the line number may not be accurate either, given that a lot of constructs are optional in TSQL (like statement separators, CRLFs etc). And if the text occurs multiple times then you will miss it too. So why don't you just use the find mechanism in your source code control system instead, rather than writing your own SP? The find mechanisms in other tools are more robust and easier to use.|||I tried this code as per your suggestion but I didn't get the expected result; nothing was returned:
declare @param as varchar(8000)

declare crsMyTblParams2 cursor for
select FileName from mozaikDB.dbo.filelist

open crsMyTblParams2
fetch next from crsMyTblParams2 into @param
while @@fetch_status=0 begin
exec dbo.sp_FindText @param
fetch next from crsMyTblParams2 into @param
end
When I try just:
exec dbo.sp_FindText 'isvirtual'
I get that the searched text 'isvirtual' exists in one SP at line 10. But when I try the code above, and the table filelist has a row containing the word 'isvirtual',
I only get 8 rows with empty values.
Any explanation please?
Thanks
|||I am not sure what modification you made. Can you post a working sample of your code that demonstrates the issue? If you modified the SP to use OBJECT_DEFINITION then you can just do something like:
select o.name
from sys.objects as o
where o.type = 'P'
and OBJECT_DEFINITION(o.object_id) like '%your_text%'
The above query will return all SPs that contain 'your_text' at least once. And I am not sure what you are doing by passing a filename to the SP. Does it use BULK INSERT to read the text from the file? And when I said that you should use the source code control system, I meant the tools that come with it too. For example, if you are using Visual SourceSafe then you can use the find option within the VSS IDE itself. Other source code control systems have similar functionality. You can also use the Visual Studio IDE to search through the files.
Lastly, below is a SP that does what you are looking for but the logic uses older technique that will work in any version of SQL Server. You can simplify it in SQL Server 2005.
http://umachandar.com/technical/SQL6x70Scripts/UtilitySPs(6x)/Main11.htm
|||Sorry, I meant I used the code from DannoCoy mentioned above that uses the cursor to loop through the table where my keywords are located: param1, param2...
I don't use any file.
FileName is just the name of the table column, so that I can get the searched keywords; then I call the SP that searches the keyword using the cursor method mentioned by DannoCoy.
I haven't modified the code yet to use OBJECT_DEFINITION, but I'll do that after everything works first.
Thanks
|||Here is the SP code I used:
USE [mozaikDB]
GO
/****** Object: StoredProcedure [dbo].[sp_FindText] Script Date: 12/31/2006 17:22:31 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[sp_FindText] @text varchar(8000), @findtype varchar(1)='P'
AS
SET NOCOUNT ON
IF @findtype='P'
SELECT DISTINCT Len(SubString(text,1, PatIndex('%' + @text + '%', text)))-Len(Replace(SubString(text,1, PatIndex('%' + @text + '%', text)),char(13),''))+1 AS Line,
PatIndex('%' + @text + '%', text) AS Position,
OBJECT_NAME(id) AS ProcName
FROM dbo.syscomments
WHERE text like '%' + @text + '%'
ORDER BY ProcName, Line
Then the table mozaikDB.dbo.filelist has the values: Param1, Param2, ..., isVirtual, ..., ParamN.
Then I used the script suggested by DannoCoy like this:
declare @param as varchar(8000)

declare crsMyTblParams cursor for
select FileName from mozaikDB.dbo.filelist

open crsMyTblParams
fetch next from crsMyTblParams into @param
while @@fetch_status=0 begin
exec dbo.sp_FindText @param
fetch next from crsMyTblParams into @param
end
The problem:
When I execute just the above SP like this:
exec dbo.sp_FindText 'isvirtual'
I get a result showing that the searched word 'isVirtual' is part of the SP CategorySave_sp at line 10,
but when I run the script
declare crsMyTblParams cursor for
......etc
suggested by DannoCoy, I get no result.
Thanks
|||R.Tutus, not sure what you are doing, but I created the sp_findtext and a filelist table (and added some character strings to search for), ran the code for the cursor and it worked fine. Question: do you have any rows (strings to search for) in your table mozaikDB.dbo.filelist?
Now that I see what you are attempting to do, I totally agree with Umachandar. The logic is flawed. This is going to produce questionable results at best. I have my name in a proc three different times, and sp_findtext is only indicating the first occurrence. It is also returning function names and views (not just stored procedures).
PS. When you use a cursor, you should finish with
close cursorname
deallocate cursorname
|||
Yeah guys, you're so right (obviously :) ).
Anyway, in my searched field in the table filelist I had the word 'isvirtual ' with many trailing spaces, since the field is nvarchar(50), instead of 'isvirtual'. That's why the cursor didn't find it.
When I redefined the column in filelist as varchar(9) instead of 50, the cursor worked perfectly.
Now I'm going to try what our friend Umachandar suggested and I'll let you know if I have any difficulty.
Thanks for tutoring me guys :)
|||
1/
Now I have replaced the code in my search procedure FindText mentioned above with this one, as suggested above:
Create PROCEDURE [dbo].[sp_FindText2] @text varchar(8000), @findtype varchar(1)='P'
AS
SET NOCOUNT ON
IF @findtype='P'
select @text, o.name AS ProcName, @text, Len(SubString(object_definition(o.object_id),1, PatIndex('%' + @text + '%', object_definition(o.object_id))))-Len(Replace(SubString(object_definition(o.object_id),1, PatIndex('%' + @text + '%', object_definition(o.object_id))),char(13),''))+1 AS Line,
PatIndex('%' + @text + '%', object_definition(o.object_id)) AS Position
from sys.objects as o
where o.type='P' and object_definition(o.object_id) like '%' + @text + '%'
ORDER BY ProcName, Line
It's working so far. I understand that this might not generate the desired results in case the searched keyword @text occurs many times in the same SP.
Is this fine, at least in your opinion? A few more questions please:
2/ Umachandar suggested to use the OBJECT_DEFINITION function and search the nvarchar(max) value directly. How does that fit in my code above, and what's the use of nvarchar(max) in this context?
3/ He also suggested "You can just write one SP that does all the operations in a set-based manner", which is to use one piece of logic in the original FindText SP to both search the keyword and loop through the table filelist containing the list of keywords to be searched. He meant that there would be no need for both one SP called FindText for the search and another SP using a cursor to execute the first SP for all the filelist rows. Would you please be able to suggest a way of doing that?
4/ One last thing: the code above searches only the SPs of the current database. I want to customize the code to look up the SPs in all the databases of the current instance, not only the current DB.
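On question 3, a set-based version could join the keyword table directly to the catalog views, with no cursor and no per-keyword procedure call. A minimal sketch, assuming the keyword table is mozaikDB.dbo.filelist with column FileName, as in this thread:

```sql
-- Set-based keyword search (sketch): one query returns every
-- (keyword, procedure) pair where the keyword occurs in the definition.
SELECT f.FileName AS Keyword,
       o.name     AS ProcName
FROM mozaikDB.dbo.filelist AS f
JOIN sys.objects AS o
  ON o.type = 'P'
 AND OBJECT_DEFINITION(o.object_id) LIKE '%' + RTRIM(f.FileName) + '%'
ORDER BY f.FileName, o.name
-- RTRIM guards against the trailing-space padding issue found earlier.
```

This only reports which procedures contain each keyword; the line-number computation could be layered on top, with the accuracy caveats Umachandar noted.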
Thanks a lot
Thanks
Monday, March 26, 2012
looping through query result column and PRINT to log file....
Basically I need to know how I would get the list, and I am using the PRINT command... I need to somehow write a loop that works through the list and PRINTs to the file...
Thanks in advance
|||I got it :)
For anyone who might be interested, here's how it's done:
DECLARE @NbrList VarChar(300)

SELECT @NbrList = COALESCE(@NbrList + ', ', '') + CAST(Invoice AS varchar(30))
from TRANSACTIONS where ProductID = 'CVSDBN'

PRINT @NbrList
Thanks ;)
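For reference, on SQL Server 2017 and later the same list can be built with STRING_AGG, avoiding both the COALESCE accumulation trick and the varchar(300) truncation risk. A sketch against the same TRANSACTIONS table:

```sql
-- Sketch, SQL Server 2017+ only: build the comma-separated list
-- with STRING_AGG; nvarchar(max) input avoids truncation.
DECLARE @NbrList nvarchar(max)

SELECT @NbrList = STRING_AGG(CAST(Invoice AS nvarchar(max)), ', ')
FROM TRANSACTIONS
WHERE ProductID = 'CVSDBN'

PRINT @NbrList
```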
Looping through Column to Insert
Attempting to insert a new Item for each Vendor into a Vendor Item Catalog. However, the Vendors are in no particular order.
i.e., sample data:
CompanyID,VendorID,ItemID
Comp1,1004,Event1
Comp1,3433,Event2
Comp1,45343,Event3
I need to be able to loop through to the next Vendor that doesn't already have this ItemID.
Any ideas?
Maybe something like this:
|||declare @source table
( CompanyID varchar(8),
VendorID integer,
ItemID varchar(8)
)

insert into @source
select 'Comp1', 1004, 'Event1' union all
select 'Comp1', 3433, 'Event2' union all
select 'Comp1', 45343, 'Event3'

declare @target table
( CompanyID varchar(8),
VendorID integer,
ItemID varchar(8)
)

insert into @target
select 'Comp1', 1004, 'Event1'

insert into @target
select CompanyID,
VendorID,
ItemID
from @source a
where not exists
( select 0 from @target b
where a.companyId = b.companyId
and a.vendorId = b.vendorId
and a.ItemId = b.ItemId
)

select * from @target
/*
CompanyID VendorID ItemID
--------- -------- ------
Comp1     1004     Event1
Comp1     3433     Event2
Comp1     45343    Event3
*/
Are you trying to INSERT or UPDATE?
Adamus
|||I am attempting to INSERT. I tried using the suggested code WHERE NOT EXISTS but it returned nothing.
|||It would be good if you post the code you are executing with a description of the tables involved.|||CODE:
declare @CompID as varchar(5),
@VendID as varchar(15)

SET @CompID = (SELECT TOP 1 CompanyID FROM ARIVendorCatalog WHERE NOT EXISTS
(SELECT TOP 1 CompanyID FROM ARIVendorCatalog
WHERE VendorItemID = 'Events - Internal Meetings'))

SET @VendID = (SELECT TOP 1 VendorID FROM ARIVendorCatalog WHERE NOT EXISTS
(SELECT TOP 1 VendorID FROM ARIVendorCatalog
WHERE VendorItemID = 'Events - Internal Meetings'))

INSERT INTO ARIVendorCatalog
VALUES (@CompID, @VendID, 'Events - Internal Meetings', 'Events - Internal Meetings', '0', 'Z-US$', '','','')
I figure that once inserted, the top vendor will change every time until there are none left.
|||Ilana:
You might be able to accomplish the insert with something like this:
|||create table ARIVendorCatalog
( CompID varchar(5),
VendorID varchar(15),
VendorItemID varchar(30),
Column_04 varchar(30),
Column_05 varchar(10),
Column_06 varchar(10),
Column_07 varchar(10),
Column_08 varchar(10),
Column_09 varchar(10)
)
go

insert into ARIVendorCatalog
select 'Comp1', 'Vend1', 'Events - Internal Meetings', 'Events - Internal Meetings', '0', 'Z-US$', '', '', '' union all
select 'Comp1', 'Vend2', 'Bricka Bracka Firecracker', 'Bugs Bunny, Bugs Bunny', '0', 'Z-US$', '', '', '' union all
select 'Comp2', 'Vend3', 'Help Richard Starr', 'Bingo Night', '0', 'Z-US$', '', '', ''

insert into ARIVendorCatalog
select CompID,
VendorID,
'Events - Internal Meetings',
'Events - Internal Meetings',
'0', 'Z-US$', '', '', ''
from ( select CompID,
VendorID,
max ( case when VendorItemID = 'Events - Internal Meetings'
then 1 else 0 end
) as hasTargetEvent
from ARIVendorCatalog
group by CompID, VendorID
having max ( case when VendorItemID = 'Events - Internal Meetings'
then 1 else 0 end
) = 0
) source

select * from ARIVendorCatalog
/*
CompID VendorID VendorItemID               Column_04                  Column_05 Column_06
------ -------- -------------------------- -------------------------- --------- ---------
Comp1  Vend1    Events - Internal Meetings Events - Internal Meetings 0         Z-US$
Comp1  Vend2    Bricka Bracka Firecracker  Bugs Bunny, Bugs Bunny     0         Z-US$
Comp2  Vend3    Help Richard Starr         Bingo Night                0         Z-US$
Comp1  Vend2    Events - Internal Meetings Events - Internal Meetings 0         Z-US$
Comp2  Vend3    Events - Internal Meetings Events - Internal Meetings 0         Z-US$
*/
Ilana:
There is a much better way of doing this if you happen to also have a "VENDOR" table that lists all unique active combinations of CompID and VendorID. If that is true you will not need to compute the aggregate in my last example and you might be able to do something like this:
create table ARIVendorCatalog
( CompID varchar(5),
VendorID varchar(15),
VendorItemID varchar(30),
Column_04 varchar(30),
Column_05 varchar(10),
Column_06 varchar(10),
Column_07 varchar(10),
Column_08 varchar(10),
Column_09 varchar(10)
)
go

insert into ARIVendorCatalog
select 'Comp1', 'Vend1', 'Events - Internal Meetings', 'Events - Internal Meetings', '0', 'Z-US$', '', '', '' union all
select 'Comp1', 'Vend2', 'Bricka Bracka Firecracker', 'Bugs Bunny, Bugs Bunny', '0', 'Z-US$', '', '', '' union all
select 'Comp2', 'Vend3', 'Help Richard Starr', 'Bingo Night', '0', 'Z-US$', '', '', ''

create table Vendor
( CompID varchar(5),
VendorID varchar(15),
vendorName varchar(20)
)
go

insert into Vendor
select 'Comp1', 'Vend1', 'Jaba Vendor' union all
select 'Comp1', 'Vend2', 'Blue Vendor' union all
select 'Comp2', 'Vend3', 'Gold Vendor' union all
select 'Comp2', 'Vend4', 'New Vendor'

insert into ARIVendorCatalog
select CompID,
VendorID,
'Events - Internal Meetings',
'Events - Internal Meetings',
'0', 'Z-US$', '', '', ''
from Vendor a
where not exists
( select 0 from ARIVendorCatalog b
where a.CompID = b.CompID
and a.VendorID = b.VendorID
and VendorItemId = 'Events - Internal Meetings'
)

select * from ARIVendorCatalog
/*
CompID VendorID VendorItemID               Column_04                  Column_05 Column_06
------ -------- -------------------------- -------------------------- --------- ---------
Comp1  Vend1    Events - Internal Meetings Events - Internal Meetings 0         Z-US$
Comp1  Vend2    Bricka Bracka Firecracker  Bugs Bunny, Bugs Bunny     0         Z-US$
Comp2  Vend3    Help Richard Starr         Bingo Night                0         Z-US$
Comp1  Vend2    Events - Internal Meetings Events - Internal Meetings 0         Z-US$
Comp2  Vend3    Events - Internal Meetings Events - Internal Meetings 0         Z-US$
Comp2  Vend4    Events - Internal Meetings Events - Internal Meetings 0         Z-US$
*/
Friday, March 23, 2012
Loop through table using column id instead of name
I need to generate a comma delimited file that is a copy of a row in a table.
I would like to be able to use the column id instead of the column name to do this. I would like to be able to pass a table name to the stored procedure and not have to have the column names hard coded.
Is this even possible?
I know it is possible to do in C++ or VB, but can it be done as a stored
procedure.
Thanks for your help,
Ken
|||It seems to me it's much more efficient to do it in the calling application. String and file handling isn't T-SQL's strong suit.
Ken Holzer wrote:
> I need to generate a comma delimited file that is a copy of a row in a table.
> I would like to be able to use the column id instead of the column name to do this.
> I would like to be able to pass a table name to the stored procedure
> and not have to have the column names hard coded.
> Is this even possible?
> I know it is possible to do in C++ or VB, but can it be done as a stored procedure.
> Thanks for your help,
> Ken
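That said, if it must live in T-SQL anyway, one possible approach is to build the SELECT dynamically from the system catalog, ordered by column id, so no column names are hard coded. A hedged sketch, assuming SQL Server 2005+; the procedure name usp_RowsAsCsv and its parameter are made up for illustration:

```sql
-- Sketch: emit each row of @tableName as one comma-delimited string,
-- concatenating columns in column-id order via dynamic SQL.
CREATE PROCEDURE dbo.usp_RowsAsCsv @tableName sysname AS
BEGIN
    DECLARE @sql nvarchar(max)

    -- Build "ISNULL(CAST(col1 ...),'') + ',' + ISNULL(CAST(col2 ...),'')"
    -- walking sys.columns in column_id order.
    SELECT @sql = COALESCE(@sql + N' + '','' + ', N'')
                + N'ISNULL(CAST(' + QUOTENAME(name) + N' AS varchar(max)), '''')'
    FROM sys.columns
    WHERE object_id = OBJECT_ID(@tableName)
    ORDER BY column_id

    SET @sql = N'SELECT ' + @sql + N' FROM ' + QUOTENAME(@tableName)
    EXEC (@sql)
END
```

Writing the result to a file would still be done by the caller (bcp, sqlcmd, or the application), per the point above about T-SQL and file handling.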
Wednesday, March 21, 2012
Loop through each record and then each field within each record
I need to essentially do 2 loops. One loops through each record and then inside each record row, I want to perform an insert on each column.
Something like this maybe using a cursor or something else:
For each record in my table (I'll just use the cursor)
For each column in current record for cursor
perform some sql based on the current column value
Next
Next
So below, all I need to do is figure out how to loop through each column for the current record in the cursor
AS
DECLARE Create_Final_Table CURSOR FOR
SELECT FieldName, AcctNumber, Screen, CaseNumber, BKYChapter, FileDate, DispositionCode, BKUDA1, RMSADD2, RMSCHPNAME_1, RMSADDR_1,
RMSCITY_1, RMSSTATECD_1, RMSZIPCODE_1, RMSWORKKPHN, BKYMEETDTE, RMSCMPNAME_2, RMSADDR1_2, RMSCITY_2, RMSSTATECD_2,
RMSZIPCODE_2, RMSHOMEPHN, BARDATE, RMSCMPNAME_3, RMSADD1_2, RMSADD2_3, RMSCITY_3, RMSZIPCODE_3, RMSWORKPHN_2
FROM EBN_TEMP1
OPEN Create_Final_Table
FETCH FROM Create_Final_Table INTO @.FieldName, @.AcctNumber, @.Screen, @.CaseNumber, @.BKYChapter, @.FileDate, @.DispositionCode, @.BKUDA1, @.RMSADD2, @.RMSCHPNAME_1, @.RMSADDR_1,
@.RMSCITY_1, @.RMSSTATECD_1, @.RMSZIPCODE_1, @.RMSWORKKPHN, @.BKYMEETDTE, @.RMSCMPNAME_2, @.RMSADDR1_2, @.RMSCITY_2, @.RMSSTATECD_2,
@.RMSZIPCODE_2, @.RMSHOMEPHN, @.BARDATE, @.RMSCMPNAME_3, @.RMSADD1_2, @.RMSADD2_3, @.RMSCITY_3, @.RMSZIPCODE_3, @.RMSWORKPHN_2
WHILE @.@.FETCH_STATUS = 0
BEGIN
@.Chapter = chapter for this record
For each column in current record <- not sure how to code this part is what I'm referring to
do some stuff here using sql for the column I'm on for this row
Next
Case @.Chapter
Case 7
Insert RecoverCodeRecord
Insert Status Code Record
Insert Attorney Code Record
Case 13
Insert Record
Insert Record
Insert Record
Case 11
Insert Record
Insert Record
Insert Record
Case 12
Insert Record
Insert Record
Insert Record
END
close Create_Final_Table
deallocate Create_Final_Table
Also, if you think there is a better way to do this, let me know.
Are you inserting from EBN_TEMP1 into multiple tables? If so then you can just use series of INSERT...SELECT statements. You need to reference the column you need in each SELECT statement.|||I have to take every record from my select, cycle through each. So let's say I cycle to the first record in my cursor. I need to then cycle through each field in that row and take that field and do something with it.
Then move on to the next row, cycle through its fields one by one and so on till I have done this for every row in my cursor. I just don't know how to cycle and reference each column in a unique row after each iteration of my cursor's rows
What I'll be doing with each column is taking the value and inserting it into another table with some other values I'll specify in a select.
|||There must be a way to do a loop to go through each field in a cursor row, but I haven't come up with any and have searched internet forever. This is shocking that nobody has ever brought this up. All they talk about is looping through a cursor's rows or just rows in general, not how to take a row and loop through to do something with every single column (field) in the row. I have a good reason for this need so please don't ask why if you're tempted to.|||I'm not trying to be rude whatsoever but to me that's inefficient to create multiple inserts and selects. But of course you probably didn't know that those selects and inserts would be inserting the same values, only the field value is changing in the statement at each iteration. So that's why I don't want to basically rewrite the same insert and select. I just need to loop through each and move in the value to a parameter in my insert statement
|||SQL is not a procedural language so it is best to approach the problem with a set oriented mindset. And this is often hard to do. So if you can perform the operation efficiently using DMLs alone it is much more efficient for the engine and it is also easier for you to maintain the code. Let's take an example. (You have to provide some examples as to what you are doing in the insert. You didn't answer my question about whether you are inserting into multiple tables)
insert into t1 (f1, f2)
select f1, f2 -- any computations on the columns can be done here
from tbl
....
insert into t1 (f3, f4)
select f3, f4 -- any computations on the columns can be done here
from tbl
....
So there is nothing like looping through each column. There simply isn't any such construct in TSQL or similar procedural languages in RDBMSes. On the other hand, if you want to unpivot the results then you can do that using the UNPIVOT operator in SQL Server 2005, or use plain SQL. To unpivot (convert columns to rows) with plain SQL you can do something like below:
-- traditional SQL way
select f1
from tbl
...
union all
select f2
from tbl
....
-- another less obvious method
select case c.c when 1 then f1 when 2 then f2 end as f
from tbl
cross join (select 1 union all select 2) as c(c)
If you do not want to repeat the query multiple times then you can define a view or inline table-valued function or temporary table or table variables and use it instead. So there are many ways to avoid duplication of code. Best is to describe your problem rather than showing procedural code since there are many ways to perform the same set of operations in SQL much more efficiently and elegantly.
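The UNPIVOT operator mentioned above (SQL Server 2005 and later) can be sketched like this; the table and column names are illustrative, not taken from the thread:

```sql
-- Unpivot columns f1..f3 of tbl into (ColName, ColValue) pairs,
-- one output row per column per input row.
-- All unpivoted columns must share one data type, hence the CASTs;
-- note that UNPIVOT silently drops rows whose value is NULL.
SELECT id, ColName, ColValue
FROM (SELECT id,
             CAST(f1 AS varchar(50)) AS f1,
             CAST(f2 AS varchar(50)) AS f2,
             CAST(f3 AS varchar(50)) AS f3
      FROM tbl) AS src
UNPIVOT (ColValue FOR ColName IN (f1, f2, f3)) AS u
```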
|||My insert will look like this and all go into one table because that table will end up being the flat file I create
Insert into table1 'a1', 'b1', @.ColumnName, @.ColumnValue, 'IO'
so for each column in the row, I have to insert it as a separate record into my final table.
Yes, this is inefficient but I have to do this for our stupid ERP system, whose UI can only map updates based on individual field records from a flat file....don't ask me why, it's retarded. They will take my flat file and use it in conjunction with the ERP Import GUI to do so, I just have to create the flat file. Before, the process was:
1) receive txt comma delimited file from our vendor
2) Parse it out into an MS Access Table
3) Create an individual record for each column in each row and include the AccountID with it and some other static values
4) save it as a fixed length flat file
Now I'm automating this process for them using SQL Server 2005 Integration Services. My flow is like this:
1) Use Flat File Source to import the comma delimited txt file (650,000 records)
2) Use Conditional Split to determine which records to filter out
3) Use OLE DB Destination Editor to move in the records to a table
4) Use a SQL TASK to code the splitting out of each field of each row into a new record in my final table. The final table will be used to create the fixed length flat file in the end.
#4 is what I'm trying to do. I have to include the following fields for each record in my final table:
AccountID, 'a1', 'b1', ColumnName, ColumnValue
So in other words for each row in my table that the OLE DB added my records to, I then have to split out each column for each row into a final table including the AccountID for every row.
I hope this makes sense, it's not as confusing as it seems.
|||so expanding on my last post, this may give you a sense:
Let's say the OLE DB moves my records into a table initially for step 3. The table now looks something like this:
Acct # Zip Phone Addr
11223 23232 333-444-5555 6556 Duns Rd.
12345 34343 222-444-3333 1000 Aspire Blvd.
I need to create a record using the Acct # and column for each column as well as append some other values like this into a final table. That final table will be a flat file in the end, I just need to figure out how to get this done first.
11223 23232 othervalue1 othervalue2
11223 333-444-5555 othervalue1 othervalue2
11223 6556 Duns Rd. othervalue1 othervalue2
12345 34343 othervalue1 othervalue2
12345 222-444-3333 othervalue1 othervalue2
12345 1000 Aspire Blvd. othervalue1 othervalue2
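Given the staging rows above, the attribute-per-row final table can be produced with one set-based statement instead of a cursor. A sketch, where `Staging`, `FinalTable`, and the column names are assumptions standing in for the real schema:

```sql
-- One (AcctNo, value) row per column per staging row;
-- the static values ride along on every branch.
INSERT INTO FinalTable (AcctNo, ColValue, Other1, Other2)
SELECT AcctNo, CAST(Zip AS varchar(50)), 'othervalue1', 'othervalue2' FROM Staging
UNION ALL
SELECT AcctNo, Phone, 'othervalue1', 'othervalue2' FROM Staging
UNION ALL
SELECT AcctNo, Addr, 'othervalue1', 'othervalue2' FROM Staging
```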
I am following your advice on an SSIS package I have that must evaluate each record. The issue I am having is that the dataReader destination is far slower than the recordset destination. Problem is I can not figure out how to get data from the record set.
loop through a tables columns
is there a way using t-sql to loop through a tables columns?
i want to build a long string with this type of logic
foreach (column in table.columns)
{
strValues += column.name + "=" + row.value
}
thank you for your help|||Abraham,
Can you give me some sample data and desired results to work with?
Not exactly sure what you're asking for.
HTH
Jerry
"Abraham Andres Luna" <abe@.rdk.com> wrote in message
news:e8wfGkB0FHA.2460@.TK2MSFTNGP10.phx.gbl...
> is there a way using t-sql to loop through a tables columns?
> i want to build a long string with this type of logic
> foreach (column in table.columns)
> {
> strValues += column.name + "=" + row.value
> }
> thank you for your help
>|||What are you trying to accomplish? Is this supposed to be some kind of
data interchange format? If so, how about using SELECT ... FOR XML to
generate an XML fragment instead?
If you really want to do this dynamically then take a look at this proc
for inspiration:
http://vyaskn.tripod.com/code/generate_inserts.txt
David Portas
SQL Server MVP
--|||Abraham Andres Luna wrote:
> is there a way using t-sql to loop through a tables columns?
> i want to build a long string with this type of logic
> foreach (column in table.columns)
> {
> strValues += column.name + "=" + row.value
> }
>
No, you will need to use a client tool for this. The best you can do in
T-SQL is (air code):
select 'col1=' + cast(col1 as varchar) + ... + '; colN=' + colN from table
I know you want to avoid this exercise, but the only way would be to use a
client tool: for example: an ADO Recordset.
Oh, I suppose you could go to a great deal of trouble to create a dynamic
sql statement by looping through the result of querying the
information_schema.columns view, but I would not advise this.
Bob Barrows
Microsoft MVP -- ASP/ASP.NET
Please reply to the newsgroup. The email account listed in my From
header is my spam trap, so I don't check it very often. You will get a
quicker response by posting to the newsgroup.|||ty for the answer, that proc was a big help
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1129225553.008794.38510@.g14g2000cwa.googlegroups.com...
> What are you trying to accomplish? Is this supposed to be some kind of
> data interchange format? If so, how about using SELECT ... FOR XML to
> generate an XML fragment instead?
> If you really want to do this dynamically then take a look at this proc
> for inspiration:
> http://vyaskn.tripod.com/code/generate_inserts.txt
> --
> David Portas
> SQL Server MVP
> --
>
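Bob's information_schema.columns idea, which he himself advises against, would look roughly like this. It is only a sketch under assumed conditions (the table name is made up, and values are assumed to cast cleanly to varchar):

```sql
-- Build 'col1=<value>; col2=<value>; ...' for every row of a table, dynamically.
-- Caveat: a NULL in any column NULLs the whole string for that row.
DECLARE @sql nvarchar(max), @tbl sysname
SET @tbl = N'YourTable'   -- assumed table name

SELECT @sql = COALESCE(@sql + N' + ''; '' + ', N'') +
       N'''' + COLUMN_NAME + N'='' + CAST(' + QUOTENAME(COLUMN_NAME) + N' AS varchar(255))'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @tbl
ORDER BY ORDINAL_POSITION

SET @sql = N'SELECT ' + @sql + N' FROM ' + QUOTENAME(@tbl)
EXEC sp_executesql @sql
```

As both replies note, a client tool or the generate_inserts proc linked above is usually the saner route.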
loooong columnnames
A long time ago I set up a database with *very* descriptive and therefore
long table names. :(
I did the same with the column names. :((
The column names even repeat the table name. :(((
This was done in a period in which I was still using a lot of dynamic sql
:(((( and sql in the code of the app :(((((.
The tables all have the prefix 'tbl' (yes, I know) :((((((
There are even ghastly underscores in it too :(((((((
As a result, I'm starting to develop a nasty case of carpal tunnel syndrome
(aka RSI) :((((((((
For example there are names like:
dbo.tbl_SalesOrderLine.SalesOrderLine_SalesOrder
I have to type them in QA a gazillion times a day
The number of objects in the database has grown well beyond even beginning
to think about renaming them all.
Now my question: Is there an MS SQL client utility out there that has
autocomplete?
Cheers,
Bas"Bas" <nomailplease> wrote in message news:<412e45ab$0$144$e4fe514c@.dreader9.news.xs4all.nl>...
> Hi,
> A long time ago I set up a database with *very* descriptive and therefore
> long table names. :(
> I did the same with the column names. :((
> The column names even repeat the table name. :(((
> This was done in a period in which I was still using a lot of dynamic sql
> :(((( and sql in the code of the app :(((((.
> The tables all have the prefix 'tbl' (yes, I know) :((((((
> There are even ghastly underscores in it too :(((((((
> As a result, I'm starting to develop a nasty case of carpal tunnel syndrom
> (aka RSI) :((((((((
> For example there are names like:
> dbo.tbl_SalesOrderLine.SalesOrderLine_SalesOrder
> I have to type them in QA a gazillion times a day
> The number of objects in the database has grown well beyond even beginning
> to think about renaming them all.
> Now my question: Is there an MS SQL client utility out there that has
> autocomplete?
> Cheers,
> Bas
There's a list of alternative client tools here:
http://www.aspfaq.com/show.asp?id=2442
And of course:
http://www.google.com/search?source...de+autocomplete
Simon
Monday, March 19, 2012
Lookup Transformation Join Column Types
We didn't include support for floating point joins, because of the inherent inability of computers to accurately compare two floating point numbers.
To make the comparisons consistent, we would need to introduce some sort of rounding and user specified precision. If we did not, you might now be posting a question about why 1.75 is not equal to 1.75 :)
I hardly ever see people store invoice or account numbers as floating point, I would be interested to know why you made this choice.
|||>>inability of computers to accurately compare two floating point numbers
I had forgotten about that little fact.
I do not store account numbers and other similar keys as a floating point data type. However, the Access database that I am importing with SSIS was created (by my predecessor) using double as the type for AccountNumber, rather than long integer. From here on out, I will do a data conversion before I do the lookup and that should take care of my problem.
Thanks!
Lookup Transform error when linking using a DT_R8
I'm sure DT_R8 does work in a Lookup. You may get an error such as -
[DDD, BackOfficeSoftwareKey]
One or more columns do not have supported data types, or their data types do not match.
This means that the source data type and the reference data type do not match. All lookups, the mappings between source and reference columns, must compare data types that match exactly, so you cannot compare a DT_I4 with a DT_R8 for example. One of them needs to be converted to the same type as the other before the match. You may be able to do this in the source extraction, or in the lookup reference table specification, use a SQL query with a CAST, or finally use a Data Conversion Transform or Derived Column.
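For example, the CAST approach in the lookup reference specification might look like this (the table and column names are assumptions for illustration):

```sql
-- Reference query for the Lookup: cast the float column so it matches
-- a DT_I4 source column exactly before the join is attempted.
SELECT CAST(AccountNumber AS int) AS AccountNumber, AccountKey
FROM dbo.DimAccount
```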
|||I'm pretty sure that's not the problem because I've tried that approach. I've also double-checked the data types in the advanced editor and the datatypes do match. They are both DT_R8. It's not the usual error message that you mentioned. It's a different one that says "input column <column_name> has a datatype which cannot be joined on."|||DT_R8 is not allowed as a join column. Neither are DT_R4 and BLOBs (DT_TEXT, DT_NTEXT, DT_IMAGE).
Lookup Transform
I want to do something relatively simple with SSIS but can't find an easy way to do this (isn't it always the case with SSIS?)
I have a column lets say called iorg_id, and I want to lookup the matching rows for this col in a table.
In this table iorg_id may have several potential matching rows. In this table there is another col called 'Amount'.
I want to retrieve for each iorg_id the matching iorg_id in the other table but only the row with the largest value in the 'Amount' col.
I couldn't find a way to do this all in the Lookup Transform.
I can match the iorg_ids and retrieve the Amount column, but can't find a way just to retrieve the matching row with the largest value in the Amount col. The only way I can think to do this is then run the output from the Transform through an Aggregate function and determine the Max (although haven't tested this yet).
Seems strange to me in that the SQL in the Advanced tab gives me something like:
select * from
(select * from [dbo].[Table1]) as refTable
where [refTable].[iorg_id] = ?
where I believe the first 'select *' is retrieving all the cols that are listed in the LookupColumns list in the Columns tab.
I thought I would be able to amend this to something like:
select max(amount) from
(select * from [dbo].[Table1]) as refTable
where [refTable].[iorg_id] = ?
but I get a metadata type error.
So, questions are:
Is it possible to do this all in the Lookup Transform are do I have to use the Aggregate function as I think ?
Why is it not possible to amend the sql in the Advanced tab to manipulate the returned data ?
If I understand the problem correctly, couldn't you simply use (in the Lookup) some SQL similar to: -
SELECT iorg_id
, MAX(amount) AS amount
FROM <YourTable>
GROUP BY iorg_id
A match on iorg_id should then give you what you want
|||Yes, Paul you're right. Thanks.
My colleague had actually pointed that out to me in the interim.
Sometimes it's difficult to see the wood for the trees!
Monday, March 12, 2012
Lookup not detecting new column on table
It does show up in the preview.
Without deleting and recreating the component (with all the ensuing broken metadata fixes 'downstream' that this always entails), how can I get it to recognise the change?
In future I will do all LUs as a sql statement so at least I can control columns. This again defeats the purpose of the drag and drop environment..... If it were a harry potter character, it would be a dementor
Unfortunately, Lookup does not automatically update the reference metadata, but you can do this using the Advanced UI. Right click the component, select 'Show Advanced Editor'. Click the Refresh button in the lower-left corner, click OK.
Now open regular Lookup Editor, you should see the new columns.
|||My workaround was to script out the table as a select and change the type to sql stmt.The column names were all the same so this didn't seem to break it.|||
Adolf,
More than a workaround; that is the best way of using a lookup transformation. There are several negative effects performance wise when you select the table from the list as opposed to provide a sql statement.
Lookup transformation by default uses RAM and you can reduce that impact by trimming the number of columns and rows that it uses (via query).
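A trimmed reference query, per the advice above, pulls only the join and output columns instead of selecting the whole table (the names here are assumed for illustration):

```sql
-- Cache only the join column and the single output column,
-- instead of every column in the reference table.
SELECT CustomerBusinessKey, CustomerKey
FROM dbo.DimCustomer
```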
Lookup including looking up on null values possible?
The behavior that I desire from my lookup transformation is that for input records with a date the corresponding datekey from the datedimension is looked up and for records with date = null, the datekey for 'unknown date' is looked up.
The first part works well but the looking up on null fails, apparently because you can not say null == null. Does anyone know whether there is a setting in the lookup transformation to include null lookups?
Thnx,
HenkThe lookup transform can not do this. You would need to put a derived column in the flow and if the value is NULL then set it to the appropriate 'unknown date' value.
Thanks,|||Thanks Matt.|||In fact it can and it is quite easy! I found out in the documentation:
"A Lookup transformation that has been configured to use partial or no caching will fail if a lookup operation matches columns that contain null values, unless you manually update the SQL statement to include an OR ISNULL(ColumnName) condition. If full precaching is used, the lookup operation succeeds."
|||So by selecting the full precaching option for the lookup, you eliminate the need to modify the SQL with the ISNULL function?|||While this can work as described I would recommend against it, which is why I didn't mention it. You need to be careful if you do lookups in this way because unless you guarantee that there is only one such value you will get the first one lookup happens to find with no warning.Full precaching will not work because the cache is fully charged and doesn't issue the SQL statement again. The reason why partial or no cache works is because the SQL statement is issued if a match isn't found and will return success due to the ISNULL statement as long as there is a NULL in the table.
There are too many ifs and caveats to make this a good solution, IMHO.
Thanks,
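One way to apply Matt's derived-column suggestion without relying on the ISNULL caveat is to map NULL dates to the 'unknown date' sentinel in the source extraction query itself. The sentinel date, table, and column names below are assumptions:

```sql
-- NULL order dates become the sentinel date carried by the date dimension's
-- 'unknown date' member, so every input row finds a match in the Lookup.
SELECT OrderID,
       ISNULL(OrderDate, '19000101') AS OrderDate
FROM dbo.Orders
```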
Lookup column / combining two datasets
I need to display values from another dataset. E.g. I have a SQL
database with employees details and that is dataset 1; but employees salaries
are stored in Excel spreadsheet and that is dataset 2.
The question is how can I display a list of employees along with their
slaries?
Thanks
JacekIf it were me, I'd work to get that Excel spreadsheet data converted
into a SQL table (I'd use DTS or SSIS to move the data). Once that's
done you can write a query that joins your detail table to your new
salary table. You would then use the results of that query for your
dataset.
HTH
toolman
Jacek wrote:
> I need to display a values from the another dataset. E.g. I have a SQL
> database with employees details and that is dataset 1; but employees salaries
> are stored in Excel spreadsheet and that is dataset 2.
> The question is how can I display a list of employees along with their
> slaries?
> Thanks
> Jacek|||And that's exactly what I had to do.
But in general: is there a way to join two datasets in SSRS? It does not
seem so ...
"toolman" wrote:
> If it were me, I'd work to get that Excel spreadsheet data converted
> into a SQL table (I'd use DTS or SSIS to move the data). Once that's
> done you can write a query that joins your detail table to your new
> salary table. You would then use the results of that query for your
> dataset.
> HTH
> toolman|||Not within a single data region like a table or matrix, etc. It's not
proven to be much of a problem for us because we always try to do as
much in SQL as we can before starting report development. Our ideal
situation is to create a single dataset with a SQL stored proc and then
simply drag and drop fields into a table.
Hang in there
toolman
Jacek wrote:
> And that's exactly what I had to do.
> But in general: is there a way to join two datasets in SSRS? It does not
> seem so ...
> "toolman" wrote:
> > If it were me, I'd work to get that Excel spreadsheet data converted
> > into a SQL table (I'd use DTS or SSIS to move the data). Once that's
> > done you can write a query that joins your detail table to your new
> > salary table. You would then use the results of that query for your
> > dataset.
> > HTH
> > toolman
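Once the spreadsheet has been loaded into a table as toolman describes, the single-dataset query is a plain join. A sketch, with all table and column names assumed:

```sql
-- Employees already live in SQL Server; Salaries is the table the
-- Excel data was imported into via DTS/SSIS.
SELECT e.EmployeeID, e.EmployeeName, s.Salary
FROM dbo.Employees AS e
INNER JOIN dbo.Salaries AS s
        ON s.EmployeeID = e.EmployeeID
```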
Wednesday, March 7, 2012
Looking for the equivalence (date issue)
Hi all of you,
1)Ok, this works properly:
ISNULL([Column 13]) ? NULL(DT_WSTR,1) : RIGHT([Column 13],2) + "/" + SUBSTRING([Column 13],5,2) + "/" + SUBSTRING([Column 13],1,4)
for (old dts2000 column transformation)
Fecha=right(DTSSource("Col014"),2) & "-" & Mid(DTSSource("Col014"),5,2) & "-" & left(DTSSource("Col014"),4)
If IsNull(Fecha) then
DTSDestination("FechaAp") = null
Else
DTSDestination("FechaAp") = Fecha
End If
2) But how can I do the same thing with IsDate, when that function is not available?
Fecha=right(DTSSource("Col014"),2) & "-" & Mid(DTSSource("Col014"),5,2) & "-" & left(DTSSource("Col014"),4)
If IsDate(Fecha) then
DTSDestination("FechaAp") = Fecha
Else
DTSDestination("FechaAp")=null
End If
Thanks a lot,
I think you'll need to use a script component to do this check. You can use the DateTime.TryParse method.
|||Thanks for that. It'll be tested.
Monday, February 20, 2012
looking for help to parse in t-sql
I have a column of varchar that has data such as 4-5. I need to manipulate the
data in a query to get it into another table, where the value would become
0040005. How could I accomplish this by parsing out the -? Other data could
be 12-99 or 1-234, which in turn should become 0120099 and 0010234. Any help would be
appreciated. Thanks.
Jake|||Hope this helps
CREATE TABLE #test
(col1 VARCHAR(5))
INSERT INTO #test VALUES('4-5')
INSERT INTO #test VALUES('12-99')
INSERT INTO #test VALUES('1-234')
SELECT
CASE WHEN CHARINDEX('-', REVERSE(col1)) = 2 THEN RIGHT('0000000'
+REPLACE(col1,'-', '000'),7)
WHEN CHARINDEX('-', REVERSE(col1)) = 3 THEN RIGHT('0000000' +
REPLACE(col1,'-', '00'),7)
WHEN CHARINDEX('-', REVERSE(col1)) = 4 THEN RIGHT('0000000' +
REPLACE(col1,'-', '0'),7)
END
FROM #test
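The transformation the CASE expression performs — split on the dash, zero-pad the left part to 3 digits and the right part to 4 — can also be sketched outside T-SQL. A minimal Python illustration of the same logic (the function name is invented for the example):

```python
def pad_code(value):
    """'4-5' -> '0040005': left part zero-padded to 3 digits, right part to 4."""
    left, right = value.split("-")
    return left.zfill(3) + right.zfill(4)

for v in ("4-5", "12-99", "1-234"):
    print(v, "->", pad_code(v))
# 4-5 -> 0040005
# 12-99 -> 0120099
# 1-234 -> 0010234
```

The T-SQL version has to count the dash position with CHARINDEX/REVERSE because there is no split function, but the underlying idea is the same padding on each side of the dash.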
Thanks
GYK
"Jake" wrote:
> Hello,
> I have a column of varchar that has data of 4-5 I need to manipulate the
> data in a query to get it into another table to where the value would be
> 0040005. How could I accomplish this by parsing out the -? Other data could
> be 12-99, 1-234 which in turn should be 0120099, 0010234. Any help would be
> appreciated. Thanks.
> Jake
>
>|||GYK,
This is great, thank you. If possible could you walk me through this so I
could learn how it worked? I got the charindex part but am confused with the
reverse and then right + replace idea. I truly appreciate it.
Jake
"GYK" <GYK@.discussions.microsoft.com> wrote in message
news:6DDBD65E-2D05-4560-9E27-9E2CE9488774@.microsoft.com...
> Hope this helps
> CREATE TABLE #test
> (col1 VARCHAR(5))
> INSERT INTO #test VALUES('4-5')
> INSERT INTO #test VALUES('12-99')
> INSERT INTO #test VALUES('1-234')
>
> SELECT
> CASE WHEN CHARINDEX('-', REVERSE(col1)) = 2 THEN RIGHT('0000000'
> +REPLACE(col1,'-', '000'),7)
> WHEN CHARINDEX('-', REVERSE(col1)) = 3 THEN RIGHT('0000000' +
> REPLACE(col1,'-', '00'),7)
> WHEN CHARINDEX('-', REVERSE(col1)) = 4 THEN RIGHT('0000000' +
> REPLACE(col1,'-', '0'),7)
> END
> FROM #test
> Thanks
> GYK
> "Jake" wrote:
>|||Let us consider one statement as an example:
WHEN CHARINDEX('-', REVERSE(col1)) = 2 THEN RIGHT('0000000'
+REPLACE(col1,'-', '000'),7)
--> First goal is to find out at what position '-' is available. So I used
charindex
--> But the position of '-' is not determined from left of the given
characters. For ex: charindex position is 2 for both 1st and 3rd row. So I
did the reverse of those characters, so that charindex is calculated from the
right of the char.
--> Replace function just replaces '-' with the specified characters . In
this case its '000'
--> Next step is to pad zeros to the left of the char. And you want to
retain the existing characters. So after the length of 40005, zero's will be
padded till the length becomes 7. See function RIGHT in BOL
Then I used case function to determine the position of '-' and the number of
zeros it should be replaced with.
If you generally look at these kind of statements, it looks complex but if
you breakdown into indivdual sections (from the inner params), it is pretty
easy.
HTH
GYK
"Jake" wrote:
> GYK,
> This is great thank you. If possible could you walk me through this so
I
> could learn how it worked? I got the charindex part but am confused with the
> reverse and then right + replace idea. I truly appreciate it.
> Jake
>
> "GYK" <GYK@.discussions.microsoft.com> wrote in message
> news:6DDBD65E-2D05-4560-9E27-9E2CE9488774@.microsoft.com...
>
>|||GYK
Thanks for the break down I appreciate it.
Jake
"GYK" <GYK@.discussions.microsoft.com> wrote in message
news:F960FC9A-7F2A-44D1-911D-4AB5E016CC9C@.microsoft.com...
> Let us consider one statement as an example:
> WHEN CHARINDEX('-', REVERSE(col1)) = 2 THEN RIGHT('0000000'
> +REPLACE(col1,'-', '000'),7)
> --> First goal is to find out at what position '-' is available. So I used
> charindex
> --> But the position of '-' is not determined from left of the given
> characters. For ex: charindex position is 2 for both 1st and 3rd row. So I
> did the reverse of those characters. So that charindex is calculated from
> the
> right of the char.
> --> Replace function just replaces '-' with the specified characters . In
> this case its '000'
> --> Next step is to pad zeros to the left of the char. And you want to
> retain the existing characters. So after the length of 40005, zero's will
> be
> padded till the length becomes 7. See function RIGHT in BOL
> Then I used case function to determine the position of '-' and the number
> of
> zeros it should be replaced with.
> If you generally look at these kind of statements, it looks complex but if
> you breakdown into indivdual sections (from the inner params), it is
> pretty
> easy.
> HTH
> GYK
>
> "Jake" wrote:
>|||CREATE TABLE #test
(col1 VARCHAR(5))
INSERT INTO #test VALUES('4-5')
INSERT INTO #test VALUES('12-99')
INSERT INTO #test VALUES('1-234')
SELECT
RIGHT('000'+SUBSTRING(col1,1,CHARINDEX('-',col1)-1),3)
+RIGHT('0000'+SUBSTRING(col1,CHARINDEX('-',col1)+1,10),4)
FROM #test
Gary
"Jake" <rondican@.hotmail.com> wrote in message
news:%23wbOrnlzEHA.2716@.TK2MSFTNGP14.phx.gbl...
> Hello,
> I have a column of varchar that has data of 4-5 I need to manipulate
the
> data in a query to get it into another table to where the value would be
> 0040005. How could I accomplish this by parsing out the -? Other data
could
> be 12-99, 1-234 which in turn should be 0120099, 0010234. Any help would
be
> appreciated. Thanks.
> Jake
>