Showing posts with label insert. Show all posts

Friday, March 30, 2012

Loss of inserted records during/after an insert

We have a system that records a data record for each cycle of a machine in an MS SQL Server database. These cycles take place approximately once every 10-12 seconds, and there are four stations on the machine, so we are writing approx. 24 records per minute. Our database contains four tables, one for each machine station. Each record contains a unique sequential number generated by the machine control software. Data is logged using SQL INSERT scripts in the application (Wonderware) that Operators use to control the machine. (Wonderware script, BTW, is not VBA, but a proprietary scripting language.)
Everything works fine until one of the stations encounters an operational fault and stops. This brings up a window on the control screen that requires the Operator to manually enter data, and an UPDATE statement is executed to modify the last record generated. Occasionally, when this update is processed, a single record will be lost (never written) in one or more of the data tables.
At first we had all of the records going to one table. Thinking maybe the update for one station was somehow locking an index in the table, we separated the tables so that each station has its own table. Since the station is stopped, no new record is generated for that station until after the update is processed. The other stations can still be running, so they are generating INSERT commands, which could coincide with the UPDATE command. Both commands use the same connection, which is always open.
We still occasionally lose ONE record in one or more of the other tables when the UPDATE executes.
Any thoughts?
Message posted via http://www.sqlmonster.com
Use the Profiler and watch the SQL statements - the most likely culprit is a
logic error within the application. Based on your narrative, I would guess
that the problem lies in the error-handling logic.
"Lee Drendall via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:9981fa1e356140a298c4ffa13b629920@.SQLMonster.c om...
> We have a system that records a data record for each cycle of a machine in
an MS SQL Server database. These cycles take place approximately once every
10-12 seconds, and there are four stations on the machine, so we are writing
approx. 24 records per minute. Our database contains four tables, one for
each machine station. Each record contains a unique sequential number
generated by the machine control software. Data is logged using SQL INSERT
scripts in the application (Wonderware) that Operators use to control the
machine. (Wonderware script, BTW is not VBA, but is a proprietary scripting
language.)
> Everything works fine, UNTIL the one of the stations encounters an
operational fault, and stops. This brings up a window on the control screen
that requires the Operator to manually enter data, and an UPDATE statement
is executed to modify the last record generated. Occasionally when this
update is processed, a single record will be lost (never written) in one or
more of the data tables.
> At first we had all of the records going to one table. Thinking maybe the
update for one station was somehow locking an index in the table, we
separated the tables so that each station has its own table. Since the
station is stopped, no new record is generated for that station until after
the update is processed. The other stations can still be running, so they
are generating INSERT commands, which could coincide with the UPDATE
command. Both commands use the same connection, which is always open.
> We still occasionally lose ONE record in one or more of the other tables
when the UPDATE executes.
> Any thoughts?
> --
> Message posted via http://www.sqlmonster.com
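
The error-handling suggestion above can be sketched in T-SQL. This is a minimal illustration, not the poster's actual code: the table and variable names (Station1Log, InsertErrors, @CycleSeq) are invented for the example. The point is that a TRY/CATCH around the INSERT makes a silently failed statement leave a trace instead of a mystery gap in the sequence numbers.

```sql
-- Hedged sketch: surface a failed station INSERT instead of losing it.
-- Station1Log, InsertErrors, and the variables are illustrative names.
DECLARE @CycleSeq int = 1001, @CycleData varchar(100) = 'cycle payload';

BEGIN TRY
    INSERT INTO Station1Log (CycleSeq, CycleData, LoggedAt)
    VALUES (@CycleSeq, @CycleData, GETDATE());
END TRY
BEGIN CATCH
    -- Record the failure somewhere durable so a "lost" row leaves evidence.
    INSERT INTO InsertErrors (CycleSeq, ErrMsg, ErrAt)
    VALUES (@CycleSeq, ERROR_MESSAGE(), GETDATE());
END CATCH;
```

If the Wonderware script swallows errors from the connection, the CATCH-side table (or a Profiler trace on Exception events) is what reveals which statement actually failed.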


Wednesday, March 28, 2012

Loop for insert into

Hello,

I have 10 tables (T1Orj, T2Orj, … T10Orj) and I need to find modified rows from each table and insert them into T1Bak, T2Bak, … T10Bak. Although the original and bak tables have common fields, the original tables have more fields than the bak tables, and T1Orj, T2Orj, … T10Orj have different table structures.

I can go ahead and write an INSERT INTO query for each table; I am just wondering, is there any way I can do this in a loop for all tables?
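
One way to avoid writing the statement ten times is dynamic SQL driven by a loop. This is a hedged sketch, not tested against the real tables: the shared column list (Col1, Col2, ModifiedDate) is invented, and since the Orj tables have different structures, each real pair would need its own column list (perhaps kept in a small mapping table).

```sql
-- Hedged sketch: loop over T1Orj..T10Orj / T1Bak..T10Bak with dynamic SQL.
-- Col1, Col2, ModifiedDate are illustrative; real column lists differ per table.
DECLARE @i int = 1, @sql nvarchar(max);
WHILE @i <= 10
BEGIN
    SET @sql = N'INSERT INTO T' + CAST(@i AS nvarchar(2)) + N'Bak (Col1, Col2)'
             + N' SELECT Col1, Col2 FROM T' + CAST(@i AS nvarchar(2)) + N'Orj'
             + N' WHERE ModifiedDate > @since;';
    -- Parameterize the cutoff date rather than concatenating it in.
    EXEC sp_executesql @sql, N'@since datetime', @since = '20120301';
    SET @i += 1;
END
```

The trade-off is the usual one with dynamic SQL: you lose compile-time checking of the table and column names, so a mapping-table-driven version should validate names against sys.columns.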

You can use a DataAdapter to do this:

DataSet ds = new DataSet();

SqlDataAdapter adp = new SqlDataAdapter("select * from authors; select * from sales", "server=srv;Integrated Security=true;database=pubs");

adp.Fill(ds);

In this case the dataset ends up with two tables,
one for authors and one for sales.

If you want you can then go and rename them, as they end up with names like "Table", "Table1", "Table2", etc.

but they come out in the order you put in ....

then you can do

Monday, March 26, 2012

Looping through Column to Insert

Attempting to insert a new Item for each Vendor into a Vendor Item Catalog. However, the Vendors are in no particular order.

ie... sample data

CompanyID,VendorID,ItemID

Comp1,1004,Event1

Comp1,3433,Event2

Comp1,45343,Event3

I need to be able to loop through to the next Vendor that doesn't already have this ItemID.

Any ideas?

Maybe something like this:

declare @source table
( CompanyID varchar(8),
  VendorID integer,
  ItemID varchar(8)
)
insert into @source
select 'Comp1', 1004, 'Event1' union all
select 'Comp1', 3433, 'Event2' union all
select 'Comp1', 45343, 'Event3'

declare @target table
( CompanyID varchar(8),
  VendorID integer,
  ItemID varchar(8)
)
insert into @target
select 'Comp1', 1004, 'Event1'

insert into @target
select CompanyID,
       VendorID,
       ItemID
from @source a
where not exists
( select 0 from @target b
  where a.companyId = b.companyId
    and a.vendorId = b.vendorId
    and a.ItemId = b.ItemId
)

select * from @target
/*
CompanyID   VendorID   ItemID
---------   --------   ------
Comp1           1004   Event1
Comp1           3433   Event2
Comp1          45343   Event3
*/

|||

Are you trying to INSERT or UPDATE?

Adamus

|||

I am attempting to INSERT. I tried using the suggested code WHERE NOT EXISTS but it returned nothing.

|||

It would be good if you post the code you are executing with a description of the tables involved.

|||

CODE:

declare @CompID as varchar(5),
        @VendID as varchar(15)

SET @CompID = SELECT CompanyID FROM ARIVendorCatalog WHERE NOT EXISTS

(SELECT TOP 1 CompanyID FROM ARIVendorCatalog

WHERE VendorItemID = 'Events - Internal Meetings')

SET @VendID = SELECT VendorID FROM ARIVendorCatalog WHERE NOT EXISTS

(SELECT TOP 1 VendorID FROM ARIVendorCatalog

WHERE VendorItemID = 'Events - Internal Meetings')

INSERT INTO ARIVendorCatalog

VALUES (@CompID, @VendID, 'Events - Internal Meetings', 'Events - Internal Meetings', '0', 'Z-US$', '','','')

I figure that once inserted, the TOP vendor will change every time until there are none left.

|||

Ilana:

You might be able to accomplish the insert with something like this:

create table ARIVendorCatalog
( CompID varchar(5),
VendorID varchar(15),
VendorItemID varchar(30),
Column_04 varchar(30),
Column_05 varchar(10),
Column_06 varchar(10),
Column_07 varchar(10),
Column_08 varchar(10),
Column_09 varchar(10)
)
go

insert into ARIVendorCatalog
select 'Comp1', 'Vend1', 'Events - Internal Meetings', 'Events - Internal Meetings', '0', 'Z-US$', '', '', '' union all
select 'Comp1', 'Vend2', 'Bricka Bracka Firecracker', 'Bugs Bunny, Bugs Bunny', '0', 'Z-US$', '', '', '' union all
select 'Comp2', 'Vend3', 'Help Richard Starr', 'Bingo Night', '0', 'Z-US$', '', '', ''

insert into ARIVendorCatalog
select CompID,
VendorID,
'Events - Internal Meetings',
'Events - Internal Meetings',
'0', 'Z-US$', '', '', ''
from ( select CompID,
VendorID,
max ( case when VendorItemID = 'Events - Internal Meetings'
then 1 else 0 end
) as hasTargetEvent
from ARIVendorCatalog
group by CompID, VendorID
having max ( case when VendorItemID = 'Events - Internal Meetings'
then 1 else 0 end
) = 0
) source

select * from ARIVendorCatalog

/*
CompID VendorID VendorItemID Column_04 Column_05 Column_06 Column_07 Column_08 Column_09
------   --------   --------------------------   --------------------------   ---------   ---------   ---------   ---------   ---------
Comp1 Vend1 Events - Internal Meetings Events - Internal Meetings 0 Z-US$
Comp1 Vend2 Bricka Bracka Firecracker Bugs Bunny, Bugs Bunny 0 Z-US$
Comp2 Vend3 Help Richard Starr Bingo Night 0 Z-US$
Comp1 Vend2 Events - Internal Meetings Events - Internal Meetings 0 Z-US$
Comp2 Vend3 Events - Internal Meetings Events - Internal Meetings 0 Z-US$
*/

|||

Ilana:

There is a much better way of doing this if you happen to also have a "VENDOR" table that lists all unique active combinations of CompID and VendorID. If that is true you will not need to compute the aggregate in my last example and you might be able to do something like this:

create table ARIVendorCatalog
( CompID varchar(5),
VendorID varchar(15),
VendorItemID varchar(30),
Column_04 varchar(30),
Column_05 varchar(10),
Column_06 varchar(10),
Column_07 varchar(10),
Column_08 varchar(10),
Column_09 varchar(10)
)
go

insert into ARIVendorCatalog
select 'Comp1', 'Vend1', 'Events - Internal Meetings', 'Events - Internal Meetings', '0', 'Z-US$', '', '', '' union all
select 'Comp1', 'Vend2', 'Bricka Bracka Firecracker', 'Bugs Bunny, Bugs Bunny', '0', 'Z-US$', '', '', '' union all
select 'Comp2', 'Vend3', 'Help Richard Starr', 'Bingo Night', '0', 'Z-US$', '', '', ''

create table Vendor
( CompID varchar(5),
VendorID varchar(15),
vendorName varchar(20)
)
go

insert into Vendor
select 'Comp1', 'Vend1', 'Jaba Vendor' union all
select 'Comp1', 'Vend2', 'Blue Vendor' union all
select 'Comp2', 'Vend3', 'Gold Vendor' union all
select 'Comp2', 'Vend4', 'New Vendor'

insert into ARIVendorCatalog
select CompID,
VendorID,
'Events - Internal Meetings',
'Events - Internal Meetings',
'0', 'Z-US$', '', '', ''
from Vendor a
where not exists
( select 0 from ARIVendorCatalog b
where a.CompID = b.CompID
and a.VendorID = b.VendorID
and VendorItemId = 'Events - Internal Meetings'
)

select * from ARIVendorCatalog

/*
CompID VendorID VendorItemID Column_04 Column_05 Column_06 Column_07 Column_08 Column_09
------   --------   --------------------------   --------------------------   ---------   ---------   ---------   ---------   ---------
Comp1 Vend1 Events - Internal Meetings Events - Internal Meetings 0 Z-US$
Comp1 Vend2 Bricka Bracka Firecracker Bugs Bunny, Bugs Bunny 0 Z-US$
Comp2 Vend3 Help Richard Starr Bingo Night 0 Z-US$
Comp1 Vend2 Events - Internal Meetings Events - Internal Meetings 0 Z-US$
Comp2 Vend3 Events - Internal Meetings Events - Internal Meetings 0 Z-US$
Comp2 Vend4 Events - Internal Meetings Events - Internal Meetings 0 Z-US$
*/

Looping Question!!..........

Hi all,
I have an initial parameter = 'TST0001'
I want to write an INSERT statement to automatically take the initial
parameter 'TST0001' and keep adding 1 to it until it get to 'TST9999'.
Now, my table should store data like these:
TST0001
TST0002
...
...
TST0010
TST0011
...
...
TST9999
Thanks,
Tom d

You can avoid looping by using a Numbers table for this sort of thing. See
the following articles:
http://www.bizdatasolutions.com/tsql/tblnumbers.asp
http://www.aspfaq.com/show.asp?id=2516
Here's an example:
CREATE TABLE foo (x VARCHAR(10) PRIMARY KEY)
INSERT INTO foo (x)
SELECT 'TST'+
RIGHT('0000'+CAST(N1.number*100+N2.number AS VARCHAR(4)),4)
FROM master.dbo.spt_values AS N1,
master.dbo.spt_values AS N2
WHERE N1.type = 'P'
AND N1.number BETWEEN 0 AND 99
AND N2.type = 'P'
AND N2.number BETWEEN 0 AND 99
I don't recommend you use this in any persistent code because spt_values
isn't documented. This is just to demonstrate what you can do with auxiliary
tables.
David Portas
SQL Server MVP
--

Friday, March 23, 2012

Loopback Server Policy

Hello all,

Recently, we ran into the issue that you can't do an insert into..exec statement on a loopback linked server that was previously commented on in:

http://www.dbnewsgroups.net/link.aspx?url=http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=124137&SiteID=1

So, for example, if you have a linkedserver to a database that happens to be on the same server as the querying thread, it fails with the message 'context in use'.

The answer from the previous thread was, don't use linked servers when the database is on the same server.

However,

The environment in our production system is fairly dynamic -- operations can be expected to move databases around in response to load-balancing issues. We were counting on linked servers to let us make certain (non-performance-sensitive) queries without regard to where a given database was located. Accepting that we have to make an exception case where the database lives on the same server means we'll have to have two sets of queries for every case where this happens.

Something like

(pseudocode)

If server of linkedserver <> @@server

    Insert into table....
    Exec linkedserver.database.dbo.sproc

Else

    Insert Into Table
    exec database.dbo.sproc

(end pseudocode)

This seems pretty kludgy to me -- any suggestions on how to better manage this situation?

Thanks in advance

Use cluster services instead of a linked server.
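
Short of clustering, the two-branch pseudocode above can at least be collapsed into one call site by building the EXEC target dynamically. This is a hedged sketch only: the linked server name (SALESSRV), database, and procedure are invented, and INSERT ... EXEC against a loopback linked server still fails, which is exactly why the local branch drops the server prefix.

```sql
-- Hedged sketch: branch on whether the "linked" server is really local,
-- and build the INSERT ... EXEC statement accordingly. Names are illustrative.
DECLARE @linked sysname = N'SALESSRV', @sql nvarchar(max);

IF @linked = CAST(SERVERPROPERTY('ServerName') AS sysname)
    -- Local: use a three-part name so no loopback connection is opened.
    SET @sql = N'INSERT INTO dbo.Results EXEC SalesDb.dbo.GetTotals;';
ELSE
    -- Remote: four-part name through the linked server.
    SET @sql = N'INSERT INTO dbo.Results EXEC ' + QUOTENAME(@linked)
             + N'.SalesDb.dbo.GetTotals;';

EXEC (@sql);
```

This keeps a single query per operation; only the target string differs, so moving a database between servers becomes a configuration change rather than a code change.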

Wednesday, March 21, 2012

Loop through each record and then each field within each record

I need to essentially do 2 loops. One loops through each record and then inside each record row, I want to perform an insert on each column.

Something like this maybe using a cursor or something else:

For each record in my table (I'll just use the cursor)
For each column in current record for cursor
perform some sql based on the current column value
Next
Next

So below, all I need to do is figure out how to loop through each column for the current record in the cursor

AS

DECLARE Create_Final_Table CURSOR FOR

SELECT FieldName, AcctNumber, Screen, CaseNumber, BKYChapter, FileDate, DispositionCode, BKUDA1, RMSADD2, RMSCHPNAME_1, RMSADDR_1,
RMSCITY_1, RMSSTATECD_1, RMSZIPCODE_1, RMSWORKKPHN, BKYMEETDTE, RMSCMPNAME_2, RMSADDR1_2, RMSCITY_2, RMSSTATECD_2,
RMSZIPCODE_2, RMSHOMEPHN, BARDATE, RMSCMPNAME_3, RMSADD1_2, RMSADD2_3, RMSCITY_3, RMSZIPCODE_3, RMSWORKPHN_2
FROM EBN_TEMP1

OPEN Create_Final_Table

FETCH FROM Create_Final_EBN_Table INTO @FieldName, @AcctNumber, @Screen, @CaseNumber, @BKYChapter, @FileDate, @DispositionCode, @BKUDA1, @RMSADD2, @RMSCHPNAME_1, @RMSADDR_1,
@RMSCITY_1, @RMSSTATECD_1, @RMSZIPCODE_1, @RMSWORKKPHN, @BKYMEETDTE, @RMSCMPNAME_2, @RMSADDR1_2, @RMSCITY_2, @RMSSTATECD_2,
@RMSZIPCODE_2, @RMSHOMEPHN, @BARDATE, @RMSCMPNAME_3, @RMSADD1_2, @RMSADD2_3, @RMSCITY_3, @RMSZIPCODE_3, @RMSWORKPHN_2

WHILE @@FETCH_STATUS = 0
BEGIN

@Chapter = chapter for this record

For each column in current record <-- not sure how to code this part is what I'm referring to

do some stuff here using sql for the column I'm on for this row

Next

Case @Chapter
Case 7

Insert RecoverCodeRecord
Insert Status Code Record
Insert Attorney Code Record

Case 13

Insert Record
Insert Record
Insert Record

Case 11

Insert Record
Insert Record
Insert Record

Case 12

Insert Record
Insert Record
Insert Record

END

close Create_Final_Table
deallocate Create_Final_Table

"I need to essentially do 2 loops." Light fuse... "One loops through each record and then inside each record row..." stand back... "I want to perform an insert on each column..." cover ears...

KABOOM!!!!!!!!!!

Why are you doing this?

|||

Are you trying to normalize this beast? If so, I'd do one insert operation per column in the original table. Fast, easy, clear, simple... What's not to like?

-PatP


Also, if you think there is a better way to do this, let me know.

Are you inserting from EBN_TEMP1 into multiple tables? If so then you can just use a series of INSERT...SELECT statements. You need to reference the column you need in each SELECT statement.

|||

I have to take every record from my select, cycle through each. So let's say I cycle to the first record in my cursor. I need to then cycle through each field in that row and take that field and do something with it.

Then move on to the next row, cycle through it's fields one by one and so on till I have done this for every row in my cursor. I just don't know how to cycle and reference each column in a unique row after each iteration of my cursor's rows

What I'll be doing wtih each colum is taking the value and inserting it into another table with some other values I'll specify in a select.

|||

There must be a way to do a loop to go through each field in a cursor row, but I haven't come up with any and have searched the internet forever. It is shocking that nobody has ever brought this up. All they talk about is looping through a cursor's rows or just rows in general, not how to take a row and loop through to do something with every single column (field) in the row. I have a good reason for this need, so please don't ask why if you're tempted to.

|||

I'm not trying to be rude whatsoever but to me that's inefficient to create multiple inserts and selects. But of course you probably didn't know that those selects and inserts would be inserting the same values, only the field value is changing in the statement at each iteration. So that's why I don't want to basically rewrite the same insert and select. I just need to loop through each and move in the value to a parameter in my insert statement

|||

SQL is not a procedural language so it is best to approach the problem with a set oriented mindset. And this is often hard to do. So if you can perform the operation efficiently using DMLs alone it is much more efficient for the engine and it is also easier for you to maintain the code. Let's take an example. (You have to provide some examples as to what you are doing in the insert. You didn't answer my question about whether you are inserting into multiple tables)

insert into t1 (f1, f2)

select f1, f2 -- any computations on the columns can be done here

from tbl

....

insert into t1 (f3, f4)

select f3, f4 -- any computations on the columns can be done here

from tbl

....

So there is nothing like looping through each column. There simply isn't any construct in TSQL or similar procedural languages in RDBMSes. On the other hand if you want to unpivot the results then you can do that using UNPIVOT operator in SQL Server 2005 or use SQL again. To use SQL to unpivot the operation of converting columns to rows then you can do something like below:

-- traditional SQL way

select f1

from tbl

...

union all

select f2

from tbl

....

-- another less obvious method

select case c.c when 1 then f1 when 2 then f2 end as f

from tbl

cross join (select 1 union all select 2) as c(c)

If you do not want to repeat the query multiple times then you can define a view or inline table-valued function or temporary table or table variables and use it instead. So there are many ways to avoid duplication of code. Best is to describe your problem rather than showing procedural code since there are many ways to perform the same set of operations in SQL much more efficiently and elegantly.

|||

My insert will look like this, and everything goes into one table because that table will end up being the flat file I create:

Insert into table1 'a1', 'b1', @ColumnName, @ColumnValue, 'IO'

so for each column in the row, I have to insert it as a separate record into my final table.

Yes, this is inefficient, but I have to do this for our stupid ERP system, whose UI can only map updates based on individual field records from a flat file... don't ask me why. They will take my flat file and use it in conjunction with the ERP Import GUI to do so; I just have to create the flat file. Before, the process was:

1) receive txt comma delimited file from our vendor

2) Parse it out into an MS Access Table

3) Create an individual record for each column in each row and include the AccountID with it and some other static values

4) save it as a fixed length flat file

Now I'm automating this process for them using SQL Server 2005 Integration Services. My flow is like this:

1) Use Flat File Source to import the comma-delimited txt file (650,000 records)

2) Use Conditional Split to determine which records to filter out

3) Use OLE DB Destination Editor to move in the records to a table

4) Use a SQL TASK to code the splitting out of each field of each row into a new record in my final table. The final table will be used to create the fixed length flat file in the end.

#4 is what I'm trying to do. I have to include the following fields for each record in my final table:

AccountID, 'a1', 'b1', ColumnName, ColumnValue

So in other words for each row in my table that the OLE DB added my records to, I then have to split out each column for each row into a final table including the account D for every row.

I hope this makes sense, it's not as confusing as it seems.

|||

so expanding on my last post, this may give you a sense:

Let's say the OLE DB destination moves my records into a table initially for step 3. The table now looks something like this:

Acct #   Zip     Phone          Addr
11223    23232   333-444-5555   6556 Duns Rd.
12345    34343   222-444-3333   1000 Aspire Blvd.

I need to create a record using the Acct # and the value of each column, appending some other values, like this, into a final table. That final table will become the flat file in the end; I just need to figure out how to get this done first.

11223   23232               othervalue1   othervalue2
11223   333-444-5555        othervalue1   othervalue2
11223   6556 Duns Rd.       othervalue1   othervalue2
12345   34343               othervalue1   othervalue2
12345   222-444-3333        othervalue1   othervalue2
12345   1000 Aspire Blvd.   othervalue1   othervalue2

|||If you are using SSIS then there is really no reason to denormalize the data in SQL Server. You can just do it in SSIS. Look at the foreach loop container in SSIS. This should allow you to loop through each column. If you have more questions about SSIS please post in the SQL Server Integration Services forum.|||ok, so then if I use the for each, how do I add my sql statement and have it refer to each column for the row I'm on?|||No. You get the data from the table as is and then perform the transformation on the client side. This is easier to do. For example, if you get a datareader for the results then you can use the columns collection with foreach container and loop through each column. If you post the question in the SSIS forum you will get more solutions.|||thanks so much|||

I am following your advice on an SSIS package I have that must evaluate each record. The issue I am having is that the dataReader destination is far slower then the recordset destination. Problem is I can not figure out how to get data from the record set.

Loop through each record and then each field within each record

I need to essentially do 2 loops. One loops through each record and then inside each record row, I want to perform an insert on each column.
Something like this maybe using a cursor or something else:
For each record in my table (I'll just use the cursor)
For each column in current record for cursor
perform some sql based on the current column value
Next
Next
So below, all I need to do is figure out how to loop through each column for the current record in the cursor

AS
DECLARE Create_Final_Table CURSOR FOR
SELECT FieldName, AcctNumber, Screen, CaseNumber, BKYChapter, FileDate, DispositionCode, BKUDA1, RMSADD2, RMSCHPNAME_1, RMSADDR_1,
    RMSCITY_1, RMSSTATECD_1, RMSZIPCODE_1, RMSWORKKPHN, BKYMEETDTE, RMSCMPNAME_2, RMSADDR1_2, RMSCITY_2, RMSSTATECD_2,
    RMSZIPCODE_2, RMSHOMEPHN, BARDATE, RMSCMPNAME_3, RMSADD1_2, RMSADD2_3, RMSCITY_3, RMSZIPCODE_3, RMSWORKPHN_2
FROM EBN_TEMP1
OPEN Create_Final_Table
FETCH NEXT FROM Create_Final_Table INTO @FieldName, @AcctNumber, @Screen, @CaseNumber, @BKYChapter, @FileDate, @DispositionCode, @BKUDA1, @RMSADD2, @RMSCHPNAME_1, @RMSADDR_1,
    @RMSCITY_1, @RMSSTATECD_1, @RMSZIPCODE_1, @RMSWORKKPHN, @BKYMEETDTE, @RMSCMPNAME_2, @RMSADDR1_2, @RMSCITY_2, @RMSSTATECD_2,
    @RMSZIPCODE_2, @RMSHOMEPHN, @BARDATE, @RMSCMPNAME_3, @RMSADD1_2, @RMSADD2_3, @RMSCITY_3, @RMSZIPCODE_3, @RMSWORKPHN_2
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Chapter = @BKYChapter -- chapter for this record
    -- For each column in current record <- not sure how to code this part is what I'm referring to
    --     do some stuff here using sql for the column I'm on for this row
    -- Next
    -- CASE @Chapter
    --     WHEN 7 THEN insert Recover Code record, Status Code record, Attorney Code record
    --     WHEN 13 THEN insert records
    --     WHEN 11 THEN insert records
    --     WHEN 12 THEN insert records
    -- END
    FETCH NEXT FROM Create_Final_Table INTO @FieldName, @AcctNumber, ... -- (same variable list as above)
END
CLOSE Create_Final_Table
DEALLOCATE Create_Final_Table

Also, if you think there is a better way to do this, let me know.

Are you inserting from EBN_TEMP1 into multiple tables? If so then you can just use series of INSERT...SELECT statements. You need to reference the column you need in each SELECT statement.|||
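As a sketch of that suggestion: each INSERT...SELECT pulls one source column, plus the fixed values, into the final table. The final table's name and column list here are assumed, since the posts don't give them; BKYChapter is one of the columns named in the cursor above.

```sql
-- One INSERT...SELECT per source column to split out; repeat per field.
INSERT INTO FinalTable (AcctNumber, Val1, Val2, ColumnName, ColumnValue)
SELECT AcctNumber, 'a1', 'b1', 'BKYChapter', BKYChapter
FROM EBN_TEMP1;
```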

I have to take every record from my select and cycle through each one. So let's say I cycle to the first record in my cursor. I need to then cycle through each field in that row, take that field, and do something with it.

Then move on to the next row, cycle through its fields one by one, and so on, until I have done this for every row in my cursor. I just don't know how to cycle through and reference each column of a unique row on each iteration of my cursor's rows.

What I'll be doing with each column is taking the value and inserting it into another table, with some other values I'll specify in a select.

|||There must be a way to loop through each field in a cursor row, but I haven't come up with one and have searched the internet forever. It's shocking that nobody has ever brought this up. All anyone talks about is looping through a cursor's rows, or rows in general, not how to take a row and loop through every single column (field) in it. I have a good reason for this need, so please don't ask why if you're tempted to.|||

I'm not trying to be rude whatsoever, but to me it's inefficient to create multiple INSERTs and SELECTs. Of course, you probably didn't know that those statements would all be inserting the same values; only the field value changes at each iteration. That's why I don't want to rewrite essentially the same INSERT and SELECT; I just need to loop through each column and move its value into a parameter in my INSERT statement.

|||

SQL is not a procedural language so it is best to approach the problem with a set oriented mindset. And this is often hard to do. So if you can perform the operation efficiently using DMLs alone it is much more efficient for the engine and it is also easier for you to maintain the code. Let's take an example. (You have to provide some examples as to what you are doing in the insert. You didn't answer my question about whether you are inserting into multiple tables)

insert into t1 (f1, f2)
select f1, f2 -- any computations on the columns can be done here
from tbl
....

insert into t1 (f3, f4)
select f3, f4 -- any computations on the columns can be done here
from tbl
....

So there is nothing like looping through each column; there simply isn't any such construct in TSQL or similar procedural languages in RDBMSes. On the other hand, if you want to unpivot the results (convert columns to rows), you can use the UNPIVOT operator in SQL Server 2005, or plain SQL like the examples below:

-- traditional SQL way
select f1
from tbl
...
union all
select f2
from tbl
....

-- another less obvious method
select case c.c when 1 then f1 when 2 then f2 end as f
from tbl
cross join (select 1 union all select 2) as c(c)

If you do not want to repeat the query multiple times then you can define a view or inline table-valued function or temporary table or table variables and use it instead. So there are many ways to avoid duplication of code. Best is to describe your problem rather than showing procedural code since there are many ways to perform the same set of operations in SQL much more efficiently and elegantly.
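For comparison, the SQL Server 2005 UNPIVOT operator mentioned above would express the same column-to-row conversion like this (same placeholder names tbl, f1, f2 as the examples above; note UNPIVOT requires the listed columns to share a data type):

```sql
-- UNPIVOT: emits one row per (row, column) pair, skipping NULLs
select f
from tbl
unpivot (f for col in (f1, f2)) as u;
```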

|||

My insert will look like this, and everything goes into one table, because that table will end up being the flat file I create:

INSERT INTO table1 VALUES ('a1', 'b1', @ColumnName, @ColumnValue, 'IO')

so for each column in the row, I have to insert it as a separate record into my final table.

Yes, this is inefficient, but I have to do it for our stupid ERP system, whose UI can only map updates based on individual field records from a flat file... don't ask me why, it's retarded. They will take my flat file and use it in conjunction with the ERP Import GUI; I just have to create the flat file. Before, the process was:

1) receive txt comma delimited file from our vendor

2) Parse it out into an MS Access Table

3) Create an individual record for each column in each row and include the AccountID with it and some other static values

4) save it as a fixed length flat file

Now I'm automating this process for them using SQL Server 2005 Integration Services. My flow is like this:

1) Use Flat File Source to import the comma-delimited txt file (650,000 records)

2) Use Conditional Split to determine which records to filter out

3) Use OLE DB Destination Editor to move in the records to a table

4) Use a SQL TASK to code the splitting out of each field of each row into a new record in my final table. The final table will be used to create the fixed length flat file in the end.

#4 is what I'm trying to do. I have to include the following fields for each record in my final table:

AccountID, 'a1', 'b1', ColumnName, ColumnValue

So in other words, for each row in the table that the OLE DB destination added my records to, I then have to split out each column into the final table, including the AccountID for every row.

I hope this makes sense, it's not as confusing as it seems.

|||

so expanding on my last post, this may give you a sense:

Let's say the OLE DB moves my records into a table initially for step 3. The table now looks something like this:

Acct #   Zip     Phone          Addr
11223    23232   333-444-5555   6556 Duns Rd.
12345    34343   222-444-3333   1000 Aspire Blvd.

I need to create a record using the Acct # and the value of each column, and append some other values, into a final table like this. That final table will become the flat file in the end; I just need to figure out how to get this done first.

11223   23232               othervalue1   othervalue2
11223   333-444-5555        othervalue1   othervalue2
11223   6556 Duns Rd.       othervalue1   othervalue2
12345   34343               othervalue1   othervalue2
12345   222-444-3333        othervalue1   othervalue2
12345   1000 Aspire Blvd.   othervalue1   othervalue2

|||If you are using SSIS then there is really no reason to denormalize the data in SQL Server. You can just do it in SSIS. Look at the foreach loop container in SSIS. This should allow you to loop through each column. If you have more questions about SSIS please post in the SQL Server Integration Services forum.|||ok, so then if I use the for each, how do I add my sql statement and have it refer to each column for the row I'm on?|||No. You get the data from the table as is and then perform the transformation on the client side. This is easier to do. For example, if you get a datareader for the results then you can use the columns collection with foreach container and loop through each column. If you post the question in the SSIS forum you will get more solutions.|||thanks so much|||

I am following your advice on an SSIS package I have that must evaluate each record. The issue I am having is that the DataReader destination is far slower than the Recordset destination. The problem is I cannot figure out how to get data from the recordset.

Loop through a recordset to populate columns in a temp table

I want to take the contents from a table of appointments and insert the
appointments for any given month into a temp table where all the
appointments for each day are inserted into a single row with a column
for each day of the month.
Is there a simple way to do this?

I have a recordset that looks like:

SELECT
a.Date,
a.Client --contents: Joe, Frank, Fred, Pete, Oscar
FROM
dbo.tblAppointments a
WHERE
a.date between ...(first and last day of the selected month)

What I want to do is to create a temp table that has 31 columns
to hold appointments and insert into each column any appointments for
the date...

CREATE TABLE #Appointments (id int identity, Day1 nvarchar(500), Day2
nvarchar(500), Day3 nvarchar(500), etc...)

Then loop through the recordset above to insert into Day1, Day 2, Day3,
etc. all the appointments for that day, with multiple appointments
separated by a comma.

INSERT INTO
#Appointments(Day1)
SELECT
a.Client
FROM
dbo.tblAppointments a
WHERE
a.date = (...first day of the month)

(LOOP to Day31)

The results would look like:

      Day1               Day2   Day3 ...
Row1  Joe, Frank, Fred   Pete

Maybe there's an even better way to handle this sort of situation?
Thanks,
lq

You can use a query to do reports like this without looping or temp tables.
As specified it seems a little strange to list the dates horizontally since
there is apparently no data on the vertical axis but the rest is really just
a matter of formatting once you have the basic query. Formatting is best
done client-side rather than in the database.

DECLARE @dt DATETIME
/* First date of the month */
SET @dt = '20050501'

SELECT
MAX(CASE WHEN DATEDIFF(DAY,@dt,date)=0 THEN client END),
MAX(CASE WHEN DATEDIFF(DAY,@dt,date)=1 THEN client END),
MAX(CASE WHEN DATEDIFF(DAY,@dt,date)=2 THEN client END),
...
MAX(CASE WHEN DATEDIFF(DAY,@dt,date)=30 THEN client END)
FROM tblAppointments
WHERE date >= @dt
AND date < DATEADD(MONTH,1,@dt)
GROUP BY client

--
David Portas
SQL Server MVP
--|||laurenq uantrell wrote:
> I want to take the contents from a table of appointments and insert the
> appointments for any given month into a temp table where all the
> appointments for each day are inserted into a single row with a column
> for each day of the month.
> Is there a simple way to do this?
> I have a recordset that looks like:
> SELECT
> a.Date,
> a.Client --contents: Joe, Frank, Fred, Pete, Oscar
> FROM
> dbo.tblAppointments a
> WHERE
> a.date between ...(first and last day of the selected month)
> What I want to do is to create a temp table that has 31 columns
> to hold appointments and insert into each column any appointments for
> the date...
> CREATE TABLE #Appointments (id int identity, Day1 nvarchar(500), Day2
> nvarchar(500), Day3 nvarchar(500), etc...)
> Then loop through the recordset above to insert into Day1, Day 2, Day3,
> etc. all the appointments for that day, with multiple appointments
> separated by a comma.
> INSERT INTO
> #Appointments(Day1)
> SELECT
> a.Client
> FROM
> dbo.tblAppointments a
> WHERE
> a.date = (...first day of the month)
> (LOOP to Day31)
>
> The results would look like
> Day1 Day2 Day3 ...
> Row1 Joe, Pete
> Frank,
> Fred
> Maybe there's an even better way to handle this sort of situation?
> Thanks,
> lq

You're talking about crosstab queries. Here's a page of links that may
be of use:

http://www.google.com/custom?q=cros...ID%3A1%3B&hl=en

--
MGFoster:::mgf00 <at> earthlink <decimal-point> net
Oakland, CA (USA)|||laurenq uantrell (laurenquantrell@hotmail.com) writes:
> I want to take the contents from a table of appointments and insert the
> appointments for any given month into a temp table where all the
> appointments for each day are inserted into a single row with a column
> for each day of the month.
> Is there a simple way to do this?
>...
> Then loop through the recordset above to insert into Day1, Day 2, Day3,
> etc. all the appointments for that day, with multiple appointments
> separated by a comma.

I'm afraid that a loop is what you will have to do, and write 31
UPDATE statements, one for each day of the month. There are ways to
build comma-separated lists with set-based statements, but the
methods used are unsupported and undefined, and cannot be trusted.
So the only way to build a CSV is to run a cursor.

OK, you don't really need 31 UPDATE statements. You could aggregate
data into a table with one per date, and then at then end run a
31-way self cross-join to produce the final result.

Certainly, a client program is much better apt to do this sort
of thing.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||David, thanks for that. I thought it must be pretty straightforward.
The reason I want it all in one row with the days' data in columns is
that I want to populate a single record on a form that resembles a
monthly calendar but which in fact is simply 31 text boxes. Previously
I had a form that populated 31 different subforms by looping through
the dates on the client side, and it is too slow. Thanks,
lq|||Erland,
My solution is to run a UDF within a View that creates the comma-separated list for each date of appointments. That part works fine on a
single date, and now I'm just figuring out how to loop through the
dates: first day of the month + 31 days.
Thanks.
lq|||Erland,
You got me in the right direction and the solution works very fast:

The stored procedure:

@ClientID int,
@dt datetime /* first day of the selected month */

AS

DECLARE @dtEnd datetime
SET @dtEnd = DATEADD(DAY,-1,DATEADD(MONTH,1,@dt)) /* last day of the
selected month */

SELECT
dbo.fn_ClientSked(@dt, @ClientID) AS D1,
dbo.fn_ClientSked(DATEADD(DAY,1,@dt), @ClientID) AS D2,
dbo.fn_ClientSked(DATEADD(DAY,2,@dt), @ClientID) AS D3,
etc...(for D4-D28)
CASE WHEN DATEADD(DAY,28,@dt) <= @dtEnd THEN
dbo.fn_ClientSked(DATEADD(DAY,28,@dt), @ClientID) END AS D29,
CASE WHEN DATEADD(DAY,29,@dt) <= @dtEnd THEN
dbo.fn_ClientSked(DATEADD(DAY,29,@dt), @ClientID) END AS D30,
CASE WHEN DATEADD(DAY,30,@dt) <= @dtEnd THEN
dbo.fn_ClientSked(DATEADD(DAY,30,@dt), @ClientID) END AS D31

The UDF:

CREATE function dbo.fn_ClientSked(@dtX as DateTime, @ClientID as int)
returns nvarchar(500)
AS
begin
declare @ret_value nvarchar(500)
SET @ret_value=''
Select @ret_value=@ret_value + '; ' + AppointmentTitle
FROM dbo.tblAppointments
WHERE tblAppointments.ClientID = @ClientID
AND @dtX Between tblAppointments.StartDate AND tblAppointments.EndDate
RETURN CASE WHEN LEN(@ret_value)>0 THEN
RIGHT(@ret_value,LEN(@ret_value)-2) ELSE '' END
end

Note: This particular UDF returns all appointments by day of the month
for the same client to populate a monthly calendar. It can be easily
modified to show appointments by employee, etc.|||laurenq uantrell (laurenquantrell@.hotmail.com) writes:
> You got me in the right direction and the solution works very fast:
>...
> declare @.ret_value nvarchar(500)
> SET @.ret_value=''
> Select @.ret_value=@.ret_value + '; ' + AppointmentTitle
> FROM dbo.tblAppointments
> WHERE
> tblAppointments.ClientID = @.ClientID
> AND
> @.dtX Between tblAppointments.StartDate AND tblAppointments.EndDate
> RETURN CASE WHEN LEN(@.ret_value)>0 THEN
> RIGHT(@.ret_value,LEN(@.ret_value)-2) ELSE '' END
> end

While it may work and be fast, it relies on undefined behaviour. See
http://support.microsoft.com/default.aspx?scid=287515.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Erland,
Now you have me worried. Because I am using a UDF within the SELECT
statement? Isn't that what they're for?
I chose this solution because the other solutions I conceived required
31 server calls to populate 31 subforms (one for each day of the month).
With the method above the data is all shoved down the pipe as one row.
Isn't that a preferable solution where a simple string output is all
that's required for each date?
lq|||laurenq uantrell (laurenquantrell@hotmail.com) writes:
> Now you have me worried. Because I am using a UDF within the SELECT
> statement? Isn't that what they're for?

No, because of

Select @ret_value=@ret_value + '; ' + AppointmentTitle
FROM dbo.tblAppointments

This can give you what you want, or just one appointment, or something
else. The result of this construct, "concatenation aggregation", is not
defined.

If you want to play safe, write a cursor instead. (But you can still
do this in a UDF.)

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||OK, thanks for the info. I hate messing with cursors...
lq
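For what it's worth, SQL Server 2005 offers a defined-behavior alternative to both the cursor and the undefined `SELECT @var = @var + ...` construct: concatenation via FOR XML PATH. A sketch against the table and parameters used in this thread (treat it as an untested outline):

```sql
-- Builds the '; '-separated appointment list for one client and date;
-- STUFF strips the leading separator. Behavior is defined, unlike the
-- variable-concatenation aggregate warned about above.
SELECT STUFF(
    (SELECT '; ' + AppointmentTitle
     FROM dbo.tblAppointments
     WHERE ClientID = @ClientID
       AND @dtX BETWEEN StartDate AND EndDate
     FOR XML PATH('')),
    1, 2, '')
```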

Loop insert

Hi,

I will do my best to explain my question:

I have a field called CUSTOMER which contains a number such as C00001,
C00002, etc.

However, I now have a new field called XCUSTOMER which is the new number that they relate to.

CUSTOMER C00001 now relates to XCUSTOMER 493845.
CUSTOMER C00002 now relates to XCUSTOMER 494343.

Basically there are hundreds of these and I don't have the time to enter the data manually. I just want to say "where customer = C00001, insert 493845 into XCUSTOMER", and then loop through and insert them all.

My table is called CUSTOMERINFO.

If anyone could help it would be much appreciated, as it would save me so much time.

Thanks :)

update customerinfo
set xcustomer = case customer
    when 'c00001' then 493845
    when 'c00002' then 494343
    ...
    end
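With hundreds of mappings, a hand-written CASE gets unwieldy. An alternative is to bulk-load the pairs into a mapping table and update via a join; the mapping table CustomerMap and its columns below are assumed, not part of the original schema:

```sql
-- Hypothetical mapping table: old customer number -> new number
-- CREATE TABLE CustomerMap (Customer varchar(10), XCustomer int);

UPDATE ci
SET ci.XCUSTOMER = m.XCustomer
FROM CUSTOMERINFO AS ci
JOIN CustomerMap AS m
  ON m.Customer = ci.CUSTOMER;
```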

loop for insert into

Hello,

I have 10 tables (T1Orj,T2Orj,…T10Orj) and I need to find modified rows from each table and insert them to T1Bak, T2Bak, …T10Bak. Although original and bak tables have the common fields , the original tables have more fields than bak tables, and T1Orj, T2Orj, … T10Orj tables have different table structures.

I can go ahead and write an insert into query for each table; I am just wondering, is there any way I can do this in a loop for all tables?

You can use EXEC to execute a dynamically constructed SQL statement that would run your query on every table from a list.|||That sounds to me like a job for a trigger, inserting the old data into a backup table at the time the delete / update is done. Otherwise, if this is only a single-execution job, you should go the way the other poster mentioned, although this is based on dynamic SQL, which should be avoided in cases like this.

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

|||

Not really. You can use information_schema.columns to generate these statements, but there is no real easy way to do it other than actual compiled statements.

What purpose is this? If you want to do this offline (and not with a trigger, as also suggested), the easiest way would be to add a rowversion (timestamp) column to the Orj tables and a binary(8) column to the bak tables (to hold the copy of the timestamp in the bak table).

Then the statements would be:

--new rows
insert into <bak> (<bakColumns>)
select <orjColumnsThatBakHas>
from <orj>
where not exists (select *
                  from <bak>
                  where <orj>.key = <bak>.key)

--changed rows
update <bak>
set <bak>.col1 = <orj>.col1,
    <bak>.col2 = <orj>.col2,
    ...
    <bak>.colN = <orj>.colN
from <bak>
join <orj>
  on <orj>.key = <bak>.key
where <orj>.rowversion <> <bak>.rowversionCopy
--or compare all columns if rowversion is not a possibility

I also am guessing you don't care about deletes, but if you want to delete rows:

delete from <bak>
where not exists (select *
                  from <orj>
                  where <orj>.key = <bak>.key)
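Jens's trigger suggestion above might look roughly like this for one of the ten tables; the key and column names are placeholders, and each Orj table would need its own trigger:

```sql
-- Hypothetical AFTER UPDATE trigger: copy the pre-update row images
-- from T1Orj into T1Bak at the moment the update happens.
CREATE TRIGGER trg_T1Orj_Backup
ON T1Orj
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO T1Bak ([key], col1, col2)
    SELECT d.[key], d.col1, d.col2
    FROM deleted AS d;  -- 'deleted' holds the old versions of updated rows
END
```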

Monday, March 12, 2012

Lookup table, extracting values

I need some help with the following...

My data source has some columns I have to 'translate' first and then insert into my destination table.

Example Source data:

key   size      height
1     'Small'   'Tall'
2     'Big'     'Short'

has to become

1   'Y'   'P'
2   'N'   'D'

I thought of creating a lookup table (I'm talking about the real table, not a lookup transformation table) that would have these columns: column name, value_source, value_dest

Example:

col_name   vl_source   vl_dest
size       'Small'     'Y'
size       'Big'       'N'
height     'Tall'      'P'

... and so on, I believe you get the point

How would you extract the needed values? Can I use a select statement in a derived column expression? Any ideas? I'm not really fond of the idea of creating n lookups, one for each column I have to translate; is there a slicker solution?

You either need to do it in your source query, or do n lookups, or embed the information in a derived column component.

Unfortunately there is no Multi lookup component.
|||

What exactly do you mean by 'embed the information in a derived column component'? Could you elaborate?
Thank you

|||

He's saying that you could either do it in the source extract query like this

Code Block
select
    ....,
    case
        when col1='small' then 's'
        when col1='medium' then 'm'
    end
from
    ....

|||

or embedded in the derived column component like this

Code Block col1=="small" ? "s" : col1=="medium" ? "m" : col1

|||You can use multiple lookup transformations in a row to get what you need. Create that physical table and then it'll work. Add a lookup for each column you want to translate.

Then in each lookup, you'll filter your SQL query against "col_name". So for example: "select vl_source, vl_dest from conversion_table where col_name = 'size'".
|||

Thank you, Phil, that's exactly what I've done.

It'd be great if derived column could 'take' select statements so it could be used instead of n lookups in this case...

|||

Sara4 wrote:
Thank you, Phil, that's exactly what I've done.

It'd be great if derived column could 'take' select statements so it could be used instead of n lookups in this case...

Per the example above, you can do CASE logic inside the derived column, but then it'll be harder to maintain than your table approach.

|||

Anthony Martin wrote:

or embeded in the derived column component like this

Code Block col1=="small" ? "s" : col1=="medium" ? "m" : col1

Thank you for the explanation. We need to keep our decoding dynamic (in the future 'small' may become 'smallx' and new values could be entered; storing the values in a derived column would mean modifying the SSIS package too).

It seems that for now the only (quick) solution is creating as many lookups as there are columns that need to be decoded.

Thank you all!

|||Sara,
You can unpivot your data as well and run it through one lookup.

So your data:

key   size      height
1     'Small'   'Tall'
2     'Big'     'Short'

Would become:

key   column     value
1     'size'     'Small'
1     'height'   'Tall'
2     'size'     'Big'
2     'height'   'Short'

This would allow you to use one lookup table. Then you'd need to pivot your data back, using the returned values from the lookup.

Takes more time to setup, but it could be worth it for you.
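If the translation were done in T-SQL rather than SSIS, the same mapping table supports one join per column to translate; `source_table` and `lookup_map` are assumed names for the tables sketched above:

```sql
SELECT s.[key],
       sz.vl_dest AS size,    -- 'Small' -> 'Y', 'Big' -> 'N'
       ht.vl_dest AS height   -- 'Tall'  -> 'P', 'Short' -> 'D'
FROM source_table AS s
LEFT JOIN lookup_map AS sz
  ON sz.col_name = 'size'   AND sz.vl_source = s.size
LEFT JOIN lookup_map AS ht
  ON ht.col_name = 'height' AND ht.vl_source = s.height;
```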

Lookup table, extracting values

I need some help with the following...

My data source has some columns I have to 'translate' first and then insert into my destination table.

Example Source data:

key size height

1 'Small' 'Tall'

2 'Big' 'Short'

has to become

1 'Y' 'P'

2 'N' 'D'

I thought of creating a lookup table (I'm talking about the real table, not a lookup transformation table) that would have these columns: column name, value_source, value_dest

Example:

col_name vl_source vl_dest

size 'Small' 'Y'

size 'Big' 'N'

height 'Tall' 'P'

... and so on, I believe you get the point

How would you extract the needed values? Can I use a select statement in a derived column expression? Any ideas? I'm not really fond of the idea to create n lookups, one for each column I have to translate, is there a slicker solution?

You either need to do it in your source when you do the query, or do n lookups or embed the information in a derived column component.

Unfortunately there is no Multi lookup component.

|||

what do you exactly mean by 'embed the information in a derived column component'? Could you elaborate?

Thank you

|||

He's saying that you could either do it in source extract query like this

Code Block

select

....

,

case

when col1='small' then 's'

when col1='medium' then 'm'

end

from

....

or embeded in the derived column component like this

Code Block

col1=="small" ? "s" : col1 || col1=="medium" ? "m" : col1

|||You can use multiple lookup transformations in a row to get what you need. Create that physical table and then it'll work. Add a lookup for each column you want to translate.

Then in each lookup, you'll filter your SQL query against "col_name". So for exampe, "select vl_source, vl_dest from conversion_table where col_name = 'size'")|||

Thank you, Phil, that's exactly what I've done.

It'd be great if derived column could 'take' select statements so it could be used instead of n lookups in this case...

|||

Sara4 wrote:

Thank you, Phil, that's exactly what I've done.

It'd be great if derived column could 'take' select statements so it could be used instead of n lookups in this case...

Per the example above, you can do CASE logic inside the derived column, but then it'll be harder to maintain that your table approach.|||

Anthony Martin wrote:

or embeded in the derived column component like this

Code Block

col1=="small" ? "s" : col1 || col1=="medium" ? "m" : col1

Thank you for the explanation. We need to keep our decodifying dynamic (in the future small may become smallx and some new values could be entered - storing the values in a derived column would mean modifying the SSIS too).

It seems that for now the only (quick) solution is creating as many lookups as there are columns that need to be decodified.

Thank you all!

|||Sara,
You can unpivot your data as well and run it through one lookup.

So your data:

key size height

1 'Small' 'Tall'

2 'Big' 'Short'

Would become:
key column value
1 'size' 'Small'
1 'height' 'Tall'
2 'size' 'Big'
2 'height' 'Short'


This would allow you to use one lookup table. Then you'd need to pivot your data back, using the returned values from the lookup.

Takes more time to setup, but it could be worth it for you.|||

I'll try it out and let you know how it went!

Lookup table, extracting values

I need some help with the following...

My data source has some columns I have to 'translate' first and then insert into my destination table.

Example Source data:

key size height

1 'Small' 'Tall'

2 'Big' 'Short'

has to become

1 'Y' 'P'

2 'N' 'D'

I thought of creating a lookup table (I'm talking about the real table, not a lookup transformation table) that would have these columns: column name, value_source, value_dest

Example:

col_name vl_source vl_dest

size 'Small' 'Y'

size 'Big' 'N'

height 'Tall' 'P'

... and so on, I believe you get the point

How would you extract the needed values? Can I use a select statement in a derived column expression? Any ideas? I'm not really fond of the idea to create n lookups, one for each column I have to translate, is there a slicker solution?

You either need to do it in your source when you do the query, or do n lookups or embed the information in a derived column component.

Unfortunately there is no Multi lookup component.

|||

what do you exactly mean by 'embed the information in a derived column component'? Could you elaborate?

Thank you

|||

He's saying that you could either do it in source extract query like this

Code Block

select

....

,

case

when col1='small' then 's'

when col1='medium' then 'm'

end

from

....

or embeded in the derived column component like this

Code Block

col1=="small" ? "s" : col1 || col1=="medium" ? "m" : col1

|||You can use multiple lookup transformations in a row to get what you need. Create that physical table and then it'll work. Add a lookup for each column you want to translate.

Then in each lookup, you'll filter your SQL query against "col_name". So for exampe, "select vl_source, vl_dest from conversion_table where col_name = 'size'")|||

Thank you, Phil, that's exactly what I've done.

It'd be great if derived column could 'take' select statements so it could be used instead of n lookups in this case...

|||

Sara4 wrote:

Thank you, Phil, that's exactly what I've done.

It'd be great if derived column could 'take' select statements so it could be used instead of n lookups in this case...

Per the example above, you can do CASE logic inside the derived column, but then it'll be harder to maintain that your table approach.|||

Anthony Martin wrote:

or embeded in the derived column component like this

Code Block

col1=="small" ? "s" : col1 || col1=="medium" ? "m" : col1

Thank you for the explanation. We need to keep our decodifying dynamic (in the future small may become smallx and some new values could be entered - storing the values in a derived column would mean modifying the SSIS too).

It seems that for now the only (quick) solution is creating as many lookups as there are columns that need to be decodified.

Thank you all!

|||Sara,
You can unpivot your data as well and run it through one lookup.

So your data:

key  size     height
1    'Small'  'Tall'
2    'Big'    'Short'

Would become:

key  column    value
1    'size'    'Small'
1    'height'  'Tall'
2    'size'    'Big'
2    'height'  'Short'


This would allow you to use one lookup table. Then you'd need to pivot your data back, using the returned values from the lookup.

Takes more time to set up, but it could be worth it for you.|||
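The unpivot-then-pivot approach described above can be sketched in a few lines. This is an illustrative simulation only; the column and table names stand in for "conversion_table" and the (vl_source, vl_dest) pairs from the thread, and the mapped codes are made up:

```python
# Sketch of the unpivot -> single lookup -> pivot approach.
rows = [
    {"key": 1, "size": "Small", "height": "Tall"},
    {"key": 2, "size": "Big", "height": "Short"},
]

# One shared conversion table keyed by (column_name, source_value),
# standing in for "select vl_source, vl_dest from conversion_table".
conversion = {
    ("size", "Small"): "S",
    ("size", "Big"): "B",
    ("height", "Tall"): "T",
    ("height", "Short"): "Sh",
}

# Unpivot: one (key, column, value) record per attribute.
unpivoted = [
    {"key": r["key"], "column": c, "value": r[c]}
    for r in rows
    for c in ("size", "height")
]

# Single lookup against the shared table.
for rec in unpivoted:
    rec["value"] = conversion[(rec["column"], rec["value"])]

# Pivot back to one record per key.
pivoted = {}
for rec in unpivoted:
    pivoted.setdefault(rec["key"], {"key": rec["key"]})[rec["column"]] = rec["value"]

result = sorted(pivoted.values(), key=lambda r: r["key"])
print(result)
```

The trade-off is as stated in the thread: one lookup table and one lookup component, at the cost of an extra Unpivot and Pivot in the data flow.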

I'll try it out and let you know how it went!

Lookup table insert, update, and delete...

All,

Just wondering if anyone is aware of a SQL server shareware utility that places a front end on a table to manage insert, update, and delete of rows on a lookup table.

We can certainly write this but before reinventing the wheel I figure I'd ask and see.

Many Thanks,

Isaac

The DB Explorer works extensively with look-ups. You will have to define the relationships by referential constraints in the database, or as navigation paths in the client front-end.

You find all resources at MayeticVillage (http://www.MayeticVillage.com/DB-Explorer)

I'll be glad to answer all your questions about it.|||You can edit table data directly in Enterprise Manager.|||You can even use the query analyzer to maintain your data by DML statements, but I invite you to experience the ease of working with the DB Explorer.|||Gosh, I had no idea that DB Explorer was "shareware".

And all this time I've been going to the trouble of right-clicking on any table in Enterprise Manager and selecting "Open Table/Return All Rows"...|||The idea behind the DB Explorer is to exploit the essence of the relational model: the relationship between entities. Working with single tables / views is simple; any tool, including Enterprise Manager, can do so. Putting a table / view into a relational context is normally the exclusive domain of client application development, but the DB Explorer is able to generate a user interface on the fly, including lookups.

Moreover, you can instantly use the relationships to navigate through your data: from a customer record to the related order records, or vice versa from an order to the corresponding customer. Even within Enterprise Manager, with its knowledge of the existing relations, you would have to write SQL to do so; the DB Explorer works (almost) completely without SQL in the GUI.

Actually, the DB Explorer isn't shareware but freeware. The only restriction is that it can be used in a stand-alone / client/server configuration only. As soon as you want to use the tool in a three-tier and/or multi-user configuration, an evaluation period of 1 month will start.

Why don't you give it a try?|||The problem I have with using enterprise mgr or query analyzer is that the users who will be doing the updates are very untechnical.

Ideally, I'd like to provide just a nice front end to the 4 or 5 tables that matter to them.

I can't overwhelm them with database names, a zillion other tables in a large list of tables, and cryptic column names etc etc etc.

Thanks,

Isaac|||Then your cheapest solution is to create an Access Data Project linked to your SQL Server database, with an entry form for each table.|||...Even cheaper would be to create updateable views and grant all permissions to those rather than the tables themselves.|||I think he is looking for a simple user interface, but it would definitely be a good idea to create the views you suggested, along with a login/role that only has access to those views, before allowing people into the database through an Access Data Project. Once they get into the .ADP file, they can get into anything their login allows, or that is poorly secured.|||The problem I have with using enterprise mgr or query analyzer is that the users who will be doing the updates are very untechnical.

Ideally, I'd like to provide just a nice front end to the 4 or 5 tables that matter to them.

I can't overwhelm them with database names, a zillion other tables in a large list of tables, and cryptic column names etc etc etc.
Isaac

Please note that the DB Explorer allows you to present just the tables you need, with functional table and field names. There is also a quite complex access control mechanism integrated, which would allow you to grant read-write access to lookup tables, and read-only access or no access at all to other tables.|||Please note that the poster requested a shareware solution. He did not ask for you to push your company's product by touting all the things it does that can also be done through Enterprise Manager.

DBForums has a Marketplace forum if you wish to advertise.|||Please note that the poster requested a shareware solution. He did not ask for you to push your companys product by touting all the things it does which can also be done through Enterprise Manager.

Which part of the freeware licensing, or of the extended functionality that is definitely NOT possible with Enterprise Manager, didn't you understand?|||So your software is free? My apologies...|||Apologies granted. :cool:

Donations, however, are always appreciated.|||Curious. Why does your website offer a trial-version of DB Explorer if it is freeware?

I don't want to bother with the demo version. Please post the link where I can just download the entire thing for free.

Thanks!|||Curious. Why does your website offer a trial-version of DB Explorer if it is freeware?

I don't want to bother with the demo version. Please post the link where I can just download the entire thing for free.

Thanks!

Thank you for pointing that out; the label was misleading. I've changed it. Note that to try the software, you will need MDAC version 2.8 or higher.

Lookup including looking up on null values possible?

In order to insert datekey values, I look up the datekey in the datedimension table. I join on the 'Date' column that contains dates. The datedimension contains one record for 'unknown date', for which the value of the 'Date' column is null.

The behavior that I desire from my lookup transformation is that for input records with a date the corresponding datekey from the datedimension is looked up and for records with date = null, the datekey for 'unknown date' is looked up.

The first part works well but the looking up on null fails, apparently because you can not say null == null. Does anyone know whether there is a setting in the lookup transformation to include null lookups?

Thnx,
Henk

The lookup transform cannot do this. You would need to put a derived column in the flow and, if the value is NULL, set it to the appropriate 'unknown date' value.

Thanks,|||Thanks Matt.|||In fact it can and it is quite easy! I found out in the documentation:

"A Lookup transformation that has been configured to use partial or no caching will fail if a lookup operation matches columns that contain null values, unless you manually update the SQL statement to include an OR ISNULL(ColumnName) condition. If full precaching is used, the lookup operation succeeds."

|||So by selecting the full precaching option for the lookup, you eliminate the need to modify the SQL with the ISNULL function?|||While this can work as described, I would recommend against it, which is why I didn't mention it. You need to be careful if you do lookups this way because, unless you guarantee that there is only one such value, you will get the first one the lookup happens to find, with no warning.

Full precaching will not work because the cache is fully charged and doesn't issue the SQL statement again. The reason why partial or no cache works is because the SQL statement is issued if a match isn't found and will return success due to the ISNULL statement as long as there is a NULL in the table.

There are too many ifs and caveats to make this a good solution, IMHO.

Thanks,
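The failure mode and the derived-column workaround discussed in this thread can be sketched as follows. This is a simulation only, assuming an 'unknown date' member with a surrogate key of -1 (the actual key values and dates are illustrative):

```python
# Sketch of why a NULL join key never matches, and of the derived-column fix.
from datetime import date

# Date dimension: Date -> datekey; None stands in for the NULL 'Date' value
# on the 'unknown date' row.
date_dim = {
    date(2012, 3, 9): 101,
    date(2012, 3, 30): 102,
    None: -1,  # the 'unknown date' member (key is illustrative)
}

def lookup_sql_style(d):
    """Mimic join semantics where NULL == NULL is not true: NULL never matches."""
    if d is None:
        return None  # the lookup fails for NULL input
    return date_dim.get(d)

def lookup_with_derived_column(d):
    """Workaround: map NULL to the known 'unknown date' key before the lookup."""
    UNKNOWN_KEY = -1
    return UNKNOWN_KEY if d is None else date_dim.get(d)

print(lookup_sql_style(None))
print(lookup_with_derived_column(None))
```

The derived column runs before the lookup, so by the time rows reach the lookup there are no NULLs left to match, which sidesteps the caching caveats entirely.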

Lookup component question

Hi,

I am doing a lookup to insert new records when the lookup has failed.

This normally works perfectly. However, when my recordset has a name column of type string with width 5 and my lookup table has a name column of char(20), the lookup will always fail and hence always inserts new records, even though the name "foo" should match.

Is there a workaround for this, or do the compared columns always have to be of the same type/length?

I would say it is always good practice to have matching data types. I was not aware that the lookup transform treated this situation as an error, but it does not surprise me.

You can add a Data Conversion before the lookup to ensure matching data types.
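A plausible explanation for the mismatch, assuming the lookup compares raw values rather than using SQL's space-insensitive char comparison, is trailing-space padding: a CHAR(20) column stores 'foo' padded to 20 characters. A small illustrative sketch:

```python
# Illustration of the string(5) vs char(20) mismatch via trailing-space padding.
incoming = "foo"            # value arriving from the source, width 5
stored = "foo".ljust(20)    # how a CHAR(20) column pads the stored value

print(incoming == stored)   # False: the padded value never matches as-is

# The fix suggested above: normalize both sides before comparing, e.g. with
# a Data Conversion (and a trim) so types and lengths agree.
print(incoming == stored.rstrip())  # True
```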

Friday, March 9, 2012

Looking for Wayne Snyder

Wayne,
Taking LearnKey course SQL 2000 Implementing DB Design. In session 8 you
use a file called "bulk insert package.dts", I have looked for the script on
all of the LearnKey downloads and the CD and can not find it. Is there any
place I can get it?
Actually I was able to recreate the bulk insert package, but the rest of the
*.dts files are missing; it would be really helpful to have them to go along
with the course.
"DazedAndConfused" <AceMagoo61@.yahoo.com> wrote in message
news:eD%23xNR7sFHA.2592@.TK2MSFTNGP09.phx.gbl...
> Wayne,
> Taking LearnKey course SQL 2000 Implementing DB Design. In session 8
> you use a file called "bulk insert package.dts", I have looked for the
> script on all of the LearnKey downloads and the CD and can not find it. Is
> there any place I can get it?
>
|||hi
got his email
visit this site
http://www.solidqualitylearning.com/...AboutWayne.htm
Jose de Jesus Jr. Mcp,Mcdba
Data Architect
Sykes Asia (Manila philippines)
MCP #2324787
"DazedAndConfused" wrote:

> Actually I was able to recreate the bulk insert package, but the rest of the
> *.dts files are missing, it would be real helpful to have then to go along
> with the course.
> "DazedAndConfused" <AceMagoo61@.yahoo.com> wrote in message
> news:eD%23xNR7sFHA.2592@.TK2MSFTNGP09.phx.gbl...
>
>