
Friday, March 30, 2012

Loss of data due to conflict

Hi all,
I installed merge replication successfully. After that I tried to add a row from the publisher with id 2004 (it is the primary key and an autogenerated column) and different other columns; likewise I inserted a row from the subscriber with id 2004 and different other columns. When I checked after the merge agent succeeded, only one row was replicated; the other row failed to replicate due to a conflict. This is causing loss of data. Please advise what I have to do to get the data from both sides.
Thanks & regards,
Reddy
Reddy,
with merge, if you have identity columns as the PK, you need to partition
according to publisher and subscriber ie each uses its own range. Before
initialization, the publisher PK is set to be "Identity Yes (Not for
Replication)" and SQL Server will manage the seeds on publisher and
subscriber and you can define when a new seed is allocated. In your case
this doesn't seem to be happening, presumably because it is a manual setup?
If this is so, you'll need to partition the identity values yourself. Here
is an article which should help you:
http://www.mssqlserver.com/replicati...h_identity.asp
HTH,
Paul Ibison
|||Paul,
Thank you very much for your information.
But if I set different ranges on both publisher and subscriber, the sequence will be broken. Is there any other way you would like to suggest?
Thanks & regards,
Chandra
|||In merge, there is no other way to partition on one single PK-identity column and avoid identity conflicts, as this would mean the subscriber needs to be in contact at all times with the publisher (zero autonomy). This is possible in transactional replication with immediate updating subscribers, as the publisher itself controls all identity values, even those on the subscriber.
As an alternative, you could make your PK 2 columns with one of them as the Site Identifier, while the other is an identity column. In this way duplicate identity values could be added and this wouldn't result in a conflict.
HTH,
Paul Ibison
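For illustration, a minimal sketch of such a two-column key (table and column names are hypothetical):

-- Duplicate identity values generated on different sites no longer
-- collide, because the primary key includes the site identifier
CREATE TABLE dbo.Orders
(
    SiteId    int NOT NULL,  -- site identifier, e.g. publisher = 1, subscriber = 2
    OrderId   int IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    OrderDate datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (SiteId, OrderId)
)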
|||Paul,
Thank you very much for your information. I'll go for the second option, that is, a PK of 2 columns with one of them as the site identifier.
I think it will work fine for my requirement.
Thanks & regards,
Reddy

Wednesday, March 28, 2012

Log deleted row

Is it possible to find who deleted a row from SQL Server 2000 with T-SQL or Query Analyzer?
No, but you can use third-party tools:
www.lumigent.com
www.red-gate.com
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
|||After the fact? Not really, SQL Server doesn't track that information. You
may be able to recover some of the transaction details by using a log
reader, e.g. see http://www.aspfaq.com/2449
"Nikolami" <nikolami@.nyc.com> wrote in message
news:dv3qam$40u$1@.ss405.t-com.hr...
> Is it possible to find who was deleted row from sql server 2000 with
> t-sql, query analyzer?
>|||Not if you don=B4t save that information somewhere, or you use a
logreader for this.
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
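Going forward, a minimal sketch of "saving that information somewhere" with a delete trigger (table and column names are hypothetical; SQL Server 2000 syntax):

CREATE TABLE dbo.Orders_DeleteAudit
(
    OrderId   int      NOT NULL,
    DeletedBy sysname  NOT NULL DEFAULT SUSER_SNAME(),  -- who deleted it
    DeletedAt datetime NOT NULL DEFAULT GETDATE()       -- when
)
GO
CREATE TRIGGER trg_Orders_Delete ON dbo.Orders
FOR DELETE
AS
    -- capture every deleted row plus login name and time
    INSERT INTO dbo.Orders_DeleteAudit (OrderId)
    SELECT OrderId FROM deleted
GO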
|||Thanks guys in advance

Looping vs. Set operation - question

Hi all,
Does anyone know if it's possible to pinpoint which row failed during a set operation in SQL Server 2000?
For example, I would like to insert 10 records with unique IDs ranging 1-10. Now, during the set operation, the insert fails on the record whose ID is 6. I would like to isolate this record (get its information) and continue with the insert.
Is this possible, or am I going to have to use LOOPs (or CURSORs)?
Also, I have to know which record failed.
Thanks in advance
Goran Djuranovic
|||In the example below, StagingTable contains the new rows to be inserted into ProductionTable. Those rows from StagingTable that contain IDs already in use will be inserted into RejectTable. By joining Staging with Production, we can avoid the use of a cursor or looping.
-- Insert rows into ProductionTable where the ID is not already in use.
insert into ProductionTable
select S.*
from StagingTable as S
left join ProductionTable as P
    on P.ID = S.ID
where P.ID is null

-- Insert rows into RejectTable where the ID is already in use.
insert into RejectTable
select S.*
from StagingTable as S
left join ProductionTable as P
    on P.ID = S.ID
where P.ID is not null
"Goran Djuranovic" <djurag@.mmcREMOVE_TO_MAIL.org> wrote in message news:%23V
y2uC8QGHA.2436@.TK2MSFTNGP11.phx.gbl...
Hi all,
Does anyone know if it's possible to pin-point what row failed, during the s
et operation in SQL Server 2000?
For example, I would like to insert 10 records with unique IDs ranging 1-10.
Now, during the set operation, the insert fails on record whose ID is 6. I
would like to isolate this record (get its information) and continue with th
e insert.
Is this possible or am I going to have to use LOOPs (or CURSORs)?
Also, I have to know which record failed.
Thanks in advance
Goran Djuranovic|||Hi JT,
Thanks for your response, but you didn't answer my question. I cannot join on IDs, because IDs from StagingTable (a table variable in my case) are auto-incremented when populated from an XML file, so they don't necessarily match the IDs from ProductionTable.
Thanks
Goran
"JT" <someone@.microsoft.com> wrote in message news:ePL0zi8QGHA.3916@.TK2MSFTN
GP11.phx.gbl...
In the example below, StagingTable contains the new rows to be inserted into
ProductionTable. Those rows from StagingTable that contain IDs already in u
se will be inserted into RejectTable. By joining Staging with Production, we
can avoid the use of a cursor or looping.
-- Insert rows into ProductionTable where the ID is not already in use.
insert into ProductionTable
select
*
from StagingTable as S
left join ProductionTable as P
on P.ID = S.ID
where
P.ID is null
-- Insert rows into RejectTable where the ID is already in use.
insert into RejectTable
select
*
from StagingTable as S
left join ProductionTable as P
on P.ID = S.ID
where
P.ID is not null
"Goran Djuranovic" <djurag@.mmcREMOVE_TO_MAIL.org> wrote in message news:%23V
y2uC8QGHA.2436@.TK2MSFTNGP11.phx.gbl...
Hi all,
Does anyone know if it's possible to pin-point what row failed, during the s
et operation in SQL Server 2000?
For example, I would like to insert 10 records with unique IDs ranging 1-10.
Now, during the set operation, the insert fails on record whose ID is 6. I
would like to isolate this record (get its information) and continue with th
e insert.
Is this possible or am I going to have to use LOOPs (or CURSORs)?
Also, I have to know which record failed.
Thanks in advance
Goran Djuranovic|||From your question, it sounded as if you have a situation where you are inse
rting from a staging table into a production table but the problem is that s
pecific rows cause the insert to fail. I was thinking perhaps the situation
is that the new row contains an ID that conflicts with the unique key constr
aint in the production table.
"Goran Djuranovic" <djurag@.mmcREMOVE_TO_MAIL.org> wrote in message news:uKEm
7QHRGHA.3916@.TK2MSFTNGP11.phx.gbl...
Hi JT,
Thanks for your response, but you didn't answer my question. I cannot join o
n IDs, because IDs from StaginTable (table variable in my case) are auto-inc
remented when populated from an XML file, so they don't necesserally match t
he IDs from ProductionTable.
Thanks
Goran
"JT" <someone@.microsoft.com> wrote in message news:ePL0zi8QGHA.3916@.TK2MSFTN
GP11.phx.gbl...
In the example below, StagingTable contains the new rows to be inserted into
ProductionTable. Those rows from StagingTable that contain IDs already in u
se will be inserted into RejectTable. By joining Staging with Production, we
can avoid the use of a cursor or looping.
-- Insert rows into ProductionTable where the ID is not already in use.
insert into ProductionTable
select
*
from StagingTable as S
left join ProductionTable as P
on P.ID = S.ID
where
P.ID is null
-- Insert rows into RejectTable where the ID is already in use.
insert into RejectTable
select
*
from StagingTable as S
left join ProductionTable as P
on P.ID = S.ID
where
P.ID is not null
"Goran Djuranovic" <djurag@.mmcREMOVE_TO_MAIL.org> wrote in message news:%23V
y2uC8QGHA.2436@.TK2MSFTNGP11.phx.gbl...
Hi all,
Does anyone know if it's possible to pin-point what row failed, during the s
et operation in SQL Server 2000?
For example, I would like to insert 10 records with unique IDs ranging 1-10.
Now, during the set operation, the insert fails on record whose ID is 6. I
would like to isolate this record (get its information) and continue with th
e insert.
Is this possible or am I going to have to use LOOPs (or CURSORs)?
Also, I have to know which record failed.
Thanks in advance
Goran Djuranovic|||Goran Djuranovic wrote:
> Hi JT,
> Thanks for your response, but you didn't answer my question.
That's because you didn't supply enough information. You didn't tell us
what "row failed" means. I assume you got an error message, so it would
help to state what the message is. Posting DDL also usually helps.
The point of JT's example is that if the error was a constraint
violation then you can use a query to filter out that row during the
INSERT. You can use a similar query to see which row(s) would have
failed. Since you didn't tell us what the error message is or show us
any data we can't give you the whole solution.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
|||The short answer is, you have to use a cursor if you want to isolate the exact row where the error occurred.
JT supplied you with a process for eliminating the errors, in this case duplicate key errors. You can also get null errors, data type conversion errors, and constraint errors, not to mention truncation of data, which can be lost without generating an error.
Unless you know exactly what data is coming through and can identify the expected errors ahead of time, you will have to use a cursor or have your application loop through the records and make individual calls to the database, which amounts to the same thing (although doing it in the app is more portable). There is another alternative, and that is to use a DTS package to load the data and set the max errors accordingly. When the package is done inserting into the table you can use a select similar to what JT provided to identify the failed rows.
"Goran Djuranovic" <djurag@.mmcREMOVE_TO_MAIL.org> wrote in message news:%23V
y2uC8QGHA.2436@.TK2MSFTNGP11.phx.gbl...
Hi all,
Does anyone know if it's possible to pin-point what row failed, during the s
et operation in SQL Server 2000?
For example, I would like to insert 10 records with unique IDs ranging 1-10.
Now, during the set operation, the insert fails on record whose ID is 6. I
would like to isolate this record (get its information) and continue with th
e insert.
Is this possible or am I going to have to use LOOPs (or CURSORs)?
Also, I have to know which record failed.
Thanks in advance
Goran Djuranovic|||Have you looked at Savepoints. They are a mechanism to roll back
portions of transactions. You create a savepoint using the SAVE
TRANSACTION savepoint_name statement, and then later execute a ROLLBACK
TRANSACTION savepoint_name statement to roll back to the savepoint
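A minimal sketch of that pattern (t1 is a hypothetical table; the savepoint name is arbitrary):

BEGIN TRANSACTION
INSERT INTO t1 (id) VALUES (1)

SAVE TRANSACTION BeforeRiskyInsert
INSERT INTO t1 (id) VALUES (6)  -- suppose this insert violates a constraint

IF @@ERROR <> 0
    ROLLBACK TRANSACTION BeforeRiskyInsert  -- undo only the failed piece

COMMIT TRANSACTION  -- the first insert is kept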
|||Thanks to your responses, guys. I decided to go with the SET operation, and before doing the SET insert I just validate the data (not 100% safe, but what is? :-) ).
Goran Djuranovic
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1142043315.953389.314120@.v46g2000cwv.googlegroups.com...
> Have you looked at Savepoints. They are a mechanism to roll back
> portions of transactions. You create a savepoint using the SAVE
> TRANSACTION savepoint_name statement, and then later execute a ROLLBACK
> TRANSACTION savepoint_name statement to roll back to the savepoint
> instead of rolling back to the start of the whole transaction.
>

Monday, March 26, 2012

Looping through one row at a time

Hoping for a little help... I'm attempting to call a stored proc, pass parameters, and display the data 1 record at a time. I need to be able to show the data in a series of labels or text boxes. So the user will see one record, pushed into the labels, click a button and go to the next record... so on and so forth.

I think I have the code to get the data correct; it's displaying the data in labels and looping through the recordset that has me clueless.


Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
'Put user code to initialize the page here
If Not Page.IsPostBack Then ' IF This is the first page load
Dim UserID As String = Request.QueryString("UserID")
' parameter for stored procedure
Dim RoleID As String = Request.QueryString("RoleID")

Dim DS As DataSet
Dim MyConnection As SqlConnection
Dim MyCommand As SqlDataAdapter

MyConnection = New SqlConnection(System.Configuration.ConfigurationSettings.AppSettings("connectionString"))
MyCommand = New SqlDataAdapter("getdirective", MyConnection)
MyCommand.SelectCommand.CommandType = CommandType.StoredProcedure
MyCommand.SelectCommand.Parameters.Add(New SqlParameter("@roleID", SqlDbType.NVarChar)).Value = RoleID

Try
DS = New DataSet
MyCommand.Fill(DS)

'Display data in a series of labels or a highly formatted datagrid

Catch ex As Exception
Response.Write("<font color=red>Error: " & ex.Message & "</font>")

End Try

Else 'IF the page is being reloaded

End If

End Sub

Private Sub cmdAck_Click(...) Handles cmdAck.Click
'This need to loop through the records

End Sub

Thanks for any help!!!
|||For that you need to maintain the current record number in ViewState. And for each click (previous, next, ...) you need to increment/decrement the number accordingly.

Hope it solves your problem.

Looping through each row in an XML object sent to a Stored Procedure

I have an XML object (sent as a string, received as an XML datatype) that's in a Stored Procedure.

Each row in the XML file will have 1 value from it inserted into one of three tables. Which table it goes into depends on the other value from the XML file.

The XML file is laid out as:

<Values>
<value>
<value>1</value>
<key>My_Field</key>
</value>
<value>
<value>3523.2</value>
<key>My_other_Field</key>
</value>
</Values>
I basically need to go through it row by row and find out what table I need to insert the value into using the key field.

Any help with this would rock. I'm using SQL 2005.

Tried a bit more but couldn't figure it out.

Basically I need to do something like this (logic):

Foreach row in @xml

    switch (SELECT FieldType FROM fields WHERE Name = @Xml[key])

        case TextBox:
            INSERT INTO TextFields(Value) VALUES (@Xml[Value])
            break;

        case ListBox:
            INSERT INTO ListFields(Value) VALUES (@Xml[Value])

... etc.

I have to check each row in the XML object sent in to see what table I need to insert the corresponding value into. I know this is possible, I'm just not sure how.

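For reference, a rough sketch of the set-based alternative using the SQL Server 2005 xml type's nodes() method (the fields/TextFields names follow the pseudocode above; the exact column types are assumptions):

DECLARE @xml xml
SET @xml = N'<Values>
  <value><value>1</value><key>My_Field</key></value>
  <value><value>3523.2</value><key>My_other_Field</key></value>
</Values>'

-- one INSERT...SELECT per target table instead of looping row by row
INSERT INTO TextFields (Value)
SELECT x.FieldValue
FROM (SELECT v.value('(key)[1]',   'varchar(128)') AS FieldKey,
             v.value('(value)[1]', 'varchar(128)') AS FieldValue
      FROM @xml.nodes('/Values/value') AS T(v)) AS x
JOIN fields f ON f.Name = x.FieldKey
WHERE f.FieldType = 'TextBox'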
|||

Hi,

Actually you can use XmlDocument to get the data from the XML file and store it in your database. See the sample below.

XmlDocument xd = new XmlDocument();
xd.Load(inputurl);  // inputurl is the path of the xml file
// Select only the outer <value> rows; GetElementsByTagName("value")
// would also match the nested <value> elements
XmlNodeList xnl = xd.SelectNodes("/Values/value");
for (int i = 0; i < xnl.Count; i++)
{
    XmlNode xn0 = xnl.Item(i).ChildNodes[1];  // the <key> element
    XmlNode xn1 = xnl.Item(i).ChildNodes[0];  // the inner <value> element
    string xn0_str = xn0.InnerText;  // xn0_str determines the table name
    string xn1_str = xn1.InnerText;  // xn1_str is the value you want to insert
}
Hope that helps. Thanks.
|||

That would work; however, that means I would be hitting the database with many calls. I'd rather execute 1 call to the database. Sometimes the XML file might have 100 rows in it. I don't want to hit the DB 100 times. I would rather send everything in at once, and then have the database do all the work. It's less expensive that way.

Thanks :) I found a way to work it through using temp tables :)

|||

Hi,

Well, I understand your needs. The main idea is to loop through the xml file first and record each node's value in an array, and then build your insert statement dynamically by looping over the array values.

Thanks.

Friday, March 23, 2012

Loop with calculation

I have a table which contains numerical data in a field called active_en_del. I need to loop through this table and perform a calculation where row 2 minus row 1 is stored in another field; row 3 minus row 2 is stored in another field, etc. How would I perform this? Thanks

I am not sure which DB you are using. You also say that you want to store the differences in different "field"s; I am not sure if by fields you mean columns or something else. Using the approach below, you can get all the results in rows:

Assuming sql server 2005, you can get rownumbers for all the rows using row_number() over(order by id)

and then do a self join.

Code Snippet

select a.col1 - b.col1
from
    (select col1, row_number() over(order by id) as rowid
     from tblData) a,
    (select col1, row_number() over(order by id) as rowid
     from tblData) b
where a.rowid = b.rowid + 1

|||My apologies for not specifying. Yes, it is SQL 2005 and it is a calculation on the same column. Thanks for the above information.

Loop through table using column id instead of name

I need to generate a comma delimited file that is a copy of a row in a table. I would like to be able to use the column id instead of the column name to do this. I would like to be able to pass a table name to the stored procedure and not have to have the column names hard coded.
Is this even possible?
I know it is possible to do in C++ or VB, but can it be done as a stored procedure?
Thanks for your help,
Ken
|||It seems to me it's much more efficient to do it in the calling application. String and file handling isn't T-SQL's strong suit.
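That said, if it must live in the database, a rough sketch using INFORMATION_SCHEMA.COLUMNS and dynamic SQL (SQL Server syntax; no escaping of embedded commas or quotes, so treat it as a starting point only):

CREATE PROCEDURE dbo.RowToCsv
    @TableName sysname
AS
BEGIN
    DECLARE @cols varchar(8000), @sql varchar(8000)
    SET @cols = ''

    -- build "ISNULL(CONVERT(...), '') + ',' + ..." in column-id order
    SELECT @cols = @cols +
           CASE WHEN @cols = '' THEN '' ELSE ' + '','' + ' END +
           'ISNULL(CONVERT(varchar(255), [' + COLUMN_NAME + ']), '''')'
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = @TableName
    ORDER BY ORDINAL_POSITION

    SET @sql = 'SELECT ' + @cols + ' FROM [' + @TableName + ']'
    EXEC (@sql)
END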

Wednesday, March 21, 2012

Loop through each record and then each field within each record

I need to essentially do 2 loops. One loops through each record and then inside each record row, I want to perform an insert on each column.

Something like this maybe using a cursor or something else:

For each record in my table (I'll just use the cursor)
For each column in current record for cursor
perform some sql based on the current column value
Next
Next

So below, all I need to do is figure out how to loop through each column for the current record in the cursor

AS

DECLARE Create_Final_Table CURSOR FOR

SELECT FieldName, AcctNumber, Screen, CaseNumber, BKYChapter, FileDate, DispositionCode, BKUDA1, RMSADD2, RMSCHPNAME_1, RMSADDR_1,
RMSCITY_1, RMSSTATECD_1, RMSZIPCODE_1, RMSWORKKPHN, BKYMEETDTE, RMSCMPNAME_2, RMSADDR1_2, RMSCITY_2, RMSSTATECD_2,
RMSZIPCODE_2, RMSHOMEPHN, BARDATE, RMSCMPNAME_3, RMSADD1_2, RMSADD2_3, RMSCITY_3, RMSZIPCODE_3, RMSWORKPHN_2
FROM EBN_TEMP1

OPEN Create_Final_Table

FETCH NEXT FROM Create_Final_Table INTO @FieldName, @AcctNumber, @Screen, @CaseNumber, @BKYChapter, @FileDate, @DispositionCode, @BKUDA1, @RMSADD2, @RMSCHPNAME_1, @RMSADDR_1,
@RMSCITY_1, @RMSSTATECD_1, @RMSZIPCODE_1, @RMSWORKKPHN, @BKYMEETDTE, @RMSCMPNAME_2, @RMSADDR1_2, @RMSCITY_2, @RMSSTATECD_2,
@RMSZIPCODE_2, @RMSHOMEPHN, @BARDATE, @RMSCMPNAME_3, @RMSADD1_2, @RMSADD2_3, @RMSCITY_3, @RMSZIPCODE_3, @RMSWORKPHN_2

WHILE @@FETCH_STATUS = 0
BEGIN

@Chapter = chapter for this record

For each column in current record <-- not sure how to code this part is what I'm referring to

do some stuff here using sql for the column I'm on for this row

Next

Case @Chapter
Case 7

Insert RecoverCodeRecord
Insert Status Code Record
Insert Attorney Code Record

Case 13

Insert Record
Insert Record
Insert Record

Case 11

Insert Record
Insert Record
Insert Record

Case 12

Insert Record
Insert Record
Insert Record

END

close Create_Final_Table
deallocate Create_Final_Table
|||"I need to essentially do 2 loops." Light fuse... "One loops through each record and then inside each record row..." stand back... "I want to perform an insert on each column..." cover ears...
KABOOM!!!!!!!!!!

Why are you doing this?|||Are you trying to normalize this beast? If so, I'd do one insert operation per column in the original table. Fast, easy, clear, simple... What's not to like?

-PatP

Loop through each record and then each field within each record

I need to essentially do 2 loops. One loops through each record and then inside each record row, I want to perform an insert on each column.
Something like this maybe using a cursor or something else:
For each record in my table (I'll just use the cursor)
For each column in current record for cursor
perform some sql based on the current column value
Next
Next
So below, all I need to do is figure out how to loop through each column for the current record in the cursor

AS
DECLARE Create_Final_Table CURSOR FOR
SELECT FieldName, AcctNumber, Screen, CaseNumber, BKYChapter, FileDate, DispositionCode, BKUDA1, RMSADD2, RMSCHPNAME_1, RMSADDR_1,
RMSCITY_1, RMSSTATECD_1, RMSZIPCODE_1, RMSWORKKPHN, BKYMEETDTE, RMSCMPNAME_2, RMSADDR1_2, RMSCITY_2, RMSSTATECD_2,
RMSZIPCODE_2, RMSHOMEPHN, BARDATE, RMSCMPNAME_3, RMSADD1_2, RMSADD2_3, RMSCITY_3, RMSZIPCODE_3, RMSWORKPHN_2
FROM EBN_TEMP1
OPEN Create_Final_Table
FETCH NEXT FROM Create_Final_Table INTO @FieldName, @AcctNumber, @Screen, @CaseNumber, @BKYChapter, @FileDate, @DispositionCode, @BKUDA1, @RMSADD2, @RMSCHPNAME_1, @RMSADDR_1,
@RMSCITY_1, @RMSSTATECD_1, @RMSZIPCODE_1, @RMSWORKKPHN, @BKYMEETDTE, @RMSCMPNAME_2, @RMSADDR1_2, @RMSCITY_2, @RMSSTATECD_2,
@RMSZIPCODE_2, @RMSHOMEPHN, @BARDATE, @RMSCMPNAME_3, @RMSADD1_2, @RMSADD2_3, @RMSCITY_3, @RMSZIPCODE_3, @RMSWORKPHN_2
WHILE @@FETCH_STATUS = 0
BEGIN
@Chapter = chapter for this record
For each column in current record <- not sure how to code this part is what I'm referring to
do some stuff here using sql for the column I'm on for this row

Next
Case @Chapter
Case 7

Insert RecoverCodeRecord
Insert Status Code Record
Insert Attorney Code Record
Case 13
Insert Record
Insert Record
Insert Record
Case 11
Insert Record
Insert Record
Insert Record
Case 12
Insert Record
Insert Record
Insert Record
END
close Create_Final_Table
deallocate Create_Final_Table

Also, if you think there is a better way to do this, let me know.

Are you inserting from EBN_TEMP1 into multiple tables? If so then you can just use series of INSERT...SELECT statements. You need to reference the column you need in each SELECT statement.|||

I have to take every record from my select, cycle through each. So let's say I cycle to the first record in my cursor. I need to then cycle through each field in that row and take that field and do something with it.

Then move on to the next row, cycle through its fields one by one, and so on till I have done this for every row in my cursor. I just don't know how to cycle through and reference each column in a unique row after each iteration of my cursor's rows.

What I'll be doing with each column is taking the value and inserting it into another table with some other values I'll specify in a select.

|||There must be a way to do a loop to go through each field in a cursor row, but I haven't come up with any and have searched internet forever. This is shocking that nobody has ever brought this up. All they talk about is looping through a cursor's rows or just rows in general, not how to take a row and loop through to do something with every single column (field) in the row. I have a good reason for this need so please don't ask why if you're tempted to.|||

I'm not trying to be rude whatsoever but to me that's inefficient to create multiple inserts and selects. But of course you probably didn't know that those selects and inserts would be inserting the same values, only the field value is changing in the statement at each iteration. So that's why I don't want to basically rewrite the same insert and select. I just need to loop through each and move in the value to a parameter in my insert statement

|||

SQL is not a procedural language so it is best to approach the problem with a set oriented mindset. And this is often hard to do. So if you can perform the operation efficiently using DMLs alone it is much more efficient for the engine and it is also easier for you to maintain the code. Let's take an example. (You have to provide some examples as to what you are doing in the insert. You didn't answer my question about whether you are inserting into multiple tables)

insert into t1 (f1, f2)
select f1, f2 -- any computations on the columns can be done here
from tbl
....

insert into t1 (f3, f4)
select f3, f4 -- any computations on the columns can be done here
from tbl
....

So there is nothing like looping through each column. There simply isn't any construct in TSQL or similar procedural languages in RDBMSes. On the other hand if you want to unpivot the results then you can do that using UNPIVOT operator in SQL Server 2005 or use SQL again. To use SQL to unpivot the operation of converting columns to rows then you can do something like below:

-- traditional SQL way
select f1
from tbl
...
union all
select f2
from tbl
....

-- another less obvious method
select case c.c when 1 then f1 when 2 then f2 end as f
from tbl
cross join (select 1 union all select 2) as c(c)

If you do not want to repeat the query multiple times then you can define a view or inline table-valued function or temporary table or table variables and use it instead. So there are many ways to avoid duplication of code. Best is to describe your problem rather than showing procedural code since there are many ways to perform the same set of operations in SQL much more efficiently and elegantly.

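To make the UNPIVOT suggestion concrete, a small sketch (SQL Server 2005; tbl, id and f1-f3 are hypothetical, the unpivoted columns must share a data type, and UNPIVOT drops NULL values):

SELECT id, fieldName, fieldValue
FROM (SELECT id, f1, f2, f3 FROM tbl) AS src
UNPIVOT (fieldValue FOR fieldName IN (f1, f2, f3)) AS u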
|||

My insert will look like this, and all rows go into one table because that table will end up being the flat file I create:

Insert into table1 values ('a1', 'b1', @ColumnName, @ColumnValue, 'IO')

so for each column in the row, I have to insert it as a separate record into my final table.

Yes, this is inefficient, but I have to do this for our stupid ERP system, whose UI can only map updates based on individual field records from a flat file... don't ask me why, it's retarded. They will take my flat file and use it in conjunction with the ERP Import GUI to do so; I just have to create the flat file. Before, the process was:

1) receive txt comma delimited file from our vendor

2) Parse it out into an MS Access Table

3) Create an individual record for each column in each row and include the AccountID with it and some other static values

4) save it as a fixed length flat file

Now I'm automating this process for them using SQL Server 2005 Integration Services. My flow is like this:

1) Use Flat File Source to import the comma delimited txt file (650,000 records)

2) Use Conditional Split to determine which records to filter out

3) Use OLE DB Destination Editor to move in the records to a table

4) Use a SQL TASK to code the splitting out of each field of each row into a new record in my final table. The final table will be used to create the fixed length flat file in the end.

#4 is what I'm trying to do. I have to include the following fields for each record in my final table:

AccountID, 'a1', 'b1', ColumnName, ColumnValue

So in other words, for each row in my table that the OLE DB destination added my records to, I then have to split out each column for each row into a final table, including the AccountID for every row.

I hope this makes sense, it's not as confusing as it seems.

|||

so expanding on my last post, this may give you a sense:

Let's say the OLE DB moves my records into a table initially for step 3. The table now looks something like this:
Acct #   Zip     Phone          Addr
11223    23232   333-444-5555   6556 Duns Rd.
12345    34343   222-444-3333   1000 Aspire Blvd.
I need to create a record using the Acct # and column for each column as well as append some other values like this into a final table. That final table will be a flat file in the end, I just need to figure out how to get this done first.
11223   23232               othervalue1   othervalue2
11223   333-444-5555        othervalue1   othervalue2
11223   6556 Duns Rd.       othervalue1   othervalue2
12345   34343               othervalue1   othervalue2
12345   222-444-3333        othervalue1   othervalue2
12345   1000 Aspire Blvd.   othervalue1   othervalue2

|||If you are using SSIS then there is really no reason to denormalize the data in SQL Server. You can just do it in SSIS. Look at the foreach loop container in SSIS. This should allow you to loop through each column. If you have more questions about SSIS please post in the SQL Server Integration Services forum.|||ok, so then if I use the for each, how do I add my sql statement and have it refer to each column for the row I'm on?|||No. You get the data from the table as is and then perform the transformation on the client side. This is easier to do. For example, if you get a datareader for the results then you can use the columns collection with foreach container and loop through each column. If you post the question in the SSIS forum you will get more solutions.|||thanks so much|||

I am following your advice on an SSIS package I have that must evaluate each record. The issue I am having is that the DataReader destination is far slower than the Recordset destination. Problem is, I cannot figure out how to get data from the recordset.


Looping through SELECT values in SQL

Hi,

Environment - VB.NET, ASP.NET, SQL Server 2000.

In a SQL User-Defined Function, I am selecting a row which returns multiple values. I need to construct one single string out of those returned values. To do that, I am using a CURSOR.

Now, a CURSOR is an expensive operation. If there are 1000 users at a time, it will consume a lot of resources.

Is there a way I can construct this string without using CURSORs?

Please advise. Thanks

Pankaj
|||You should be able to do something like this to concatenate the columns together:


DECLARE @myResult VARCHAR(8000)
SET @myResult = ''
SELECT
    @myResult = @myResult + myColumn1 + myColumn2 + myColumn3 + myColumn4
FROM
    myTable

Terri
|||My mistake, the multiple values selected are from the same column (multiple rows).
|||That's OK, you can still use the same method:

DECLARE @myResult VARCHAR(8000)
SET @myResult = ''
SELECT
    @myResult = @myResult + myColumn1
FROM
    myTable

Terri
|||Even though the described SQL works, this is not a supported T-SQL approach. As per the documentation from BOL on the SELECT statement:
:: If the SELECT statement returns more than one value, the variable is assigned the last value returned. ::

And such an approach is dangerous and can be removed in future builds or service packs. Do not rely on it.

Check out a thread posted by Umachander (MVP, SQL Server) on the same at the public.sqlserver.programming newsgroup. http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=e7dcU%23jiAHA.2088%40tkmsftngp03

There are many other approaches to the same problem. Some of the solutions have been outlined at: http://www.extremeexperts.com/sql/articles/IterateTSQLResult.aspx
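One other approach commonly shown for SQL Server 2005 is FOR XML PATH('') (note that it entitizes characters such as & and <, so check it against your data; myTable/myColumn1 follow the example above):

DECLARE @myResult VARCHAR(8000)
SELECT @myResult = STUFF(
    (SELECT ',' + myColumn1
     FROM myTable
     FOR XML PATH('')), 1, 1, '')  -- STUFF strips the leading comma
SELECT @myResult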

Lookups and their error flows

So I have three lookups in a row in my data flow. Basically they are doing data quality checks for me using a reference table.

I want to be able to take the error flows of the three lookups and merge them together (union all) so that I can insert the "errors" (or non matches) into a table.

Can't do it. Because SSIS deems non-matches as "errors" you automatically get the errorCode and errorColumn fields. When you try to union a lookup error output with another lookup's error output, you can't do it.

What I would like to see is a lookup that acts more like a conditional statement, where you have three outputs of a lookup table: match found, no match found, and error. Either that, or I'd like to be able to edit the names of the errorCode and errorColumn fields.

Am I missing something here, or do I need to just add an OLE destination for each lookup error flow when I only want one? 'Course the problem then is that I want to count the number of rows that are in "error" across all of the lookups.

Phil Brammer wrote:


Can't do it. Because SSIS deems non-matches as "errors" you automatically get the errorCode and errorColumn fields. When you try to union a lookup error output with another lookup's error output, you can't do it.

Phil,

What do you mean by 'Can't do it'? Are you actually receiving an error?

I was able to set that up; two error outputs from lookup transformations going to a Union All. It seems to work, but that was with a simple test scenario I made. Yes, ErrorColumn and ErrorCode were there as well, but it is up to you to use them or remove them from the pipeline. What is actually not there is the lookup value, but I would not expect it to be, since the lookup failed.

Another way to accomplish something like this is to configure lookups to 'ignore errors' and then at the end to use a conditional split to send the rows having nulls on the lookup values column to the error branch of the dataflow. Just a thought

Rafael Salas

|||User error... You made me think about it again, and when I looked at what I was trying to do, I realized my mistake. Nevertheless, what I was doing was taking the error "redirect" from the lookup and adding a new column with a derived column transformation. This was repeated on the other lookup error flows. Then my mistake (an accident!) was taking the *error* output of the derived column transformations instead of the data flow output and pushing them to the union all transformation.|||

Oh, I see! BTW, in my previous post I meant ignore (instead of redirect) errors as an alternative way. I corrected it, though.

Rafael salas


Monday, March 19, 2012

Lookup with multiple hits

In a Lookup component I've defined a SQL query which returns a sorted resultset. For each Lookup component input row I want to have a single output row. The problem is that for each input row there is a possibility of multiple matches in the SQL query resultset. From all of the possible multiple hits I want only the first one to be returned, and if no match is found, then no output row. How do I implement this?

Try changing your lookup query to only return the rows you are interested in.

Code Block

select
    colA
    ,colB
    ,max(colC)  -- or min(colC)
from
    tableA
group by
    colA
    ,colB

|||

The lookup component does this by default does it not?

Edit: that is, return only the first result that it runs into. Just make sure that you return the correct row first and you should be fine...

Try the following to prove:

select 19000101 as datekey, '01/01/1900' as datename
UNION
select 19000102 as datekey, '01/02/1900' as datename
UNION
select 19000103 as datekey, '01/03/1900' as datename
UNION
select 19000104 as datekey, '01/04/1900' as datename
UNION
select 19000105 as datekey, '01/05/1900' as datename

in an oledb source

and then in the lookup

select 19000101 as datekey, 'myname11' as name
UNION
select 19000102 as datekey, 'myname21' as name
UNION
select 19000103 as datekey, 'myname31' as name
UNION
select 19000104 as datekey, 'myname41' as name
UNION
select 19000101 as datekey, 'myname12' as name
UNION
select 19000102 as datekey, 'myname22' as name
UNION
select 19000102 as datekey, 'myname23' as name

Notice, there is no match for 19000105, it will redirect.

You will return the values

19000101, '01/01/1900', 'myname11'
19000102, '01/02/1900', 'myname21'
19000103, '01/03/1900', 'myname31'
19000104, '01/04/1900', 'myname41'

|||

Eric Wisdahl wrote:

The lookup component does this by default does it not?

I believe so. The message about duplicate values is just a warning, not an error. I don't think there's a guaranteed order -- it just picks the first one it comes across.|||

Eric Wisdahl wrote:

The lookup component does this by default does it not?

Yes. Lookup is a synchronous component; hence the number of rows in the output is the same as the number of rows in the input. If your lookup query returns more than one row for an incoming row, then 'the first' one would be used, and you don't have control over which one.
|||The SQL query returns a result set of multiple contact persons for multiple companies. On output I need, for each company, to keep just one of the contact persons. The result set is sorted (ORDER BY) so that if many contact persons are found per company, the one that should be chosen is highest in order. On the input of the Lookup component I've put an OLE_SRC component which fetches all the companies.
I couldn't use just GROUP BY because I need columns in the result set which aren't used in an aggregate function and shouldn't be grouped on.
|||

I would not feel comfortable relying on SSIS picking the 1st from the list. If the query in the lookup component is against SQL Server, Oracle or another RDBMS where the rank function is available, I would use a query like:

http://rafael-salas.blogspot.com/2007/04/remove-duplicates-using-t-sql-rank.html

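Roughly, the pattern from that article applied here (Contacts, CompanyId and ContactRank are hypothetical names; the ORDER BY decides which contact wins):

SELECT CompanyId, ContactName
FROM (SELECT CompanyId, ContactName,
             ROW_NUMBER() OVER (PARTITION BY CompanyId
                                ORDER BY ContactRank) AS rn
      FROM Contacts) AS t
WHERE rn = 1  -- exactly one contact per company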
Notice that you could mimic the result set even without the rank() function, but the query could get a little complex.
|||Great, thank you all, especially to Rafael.

In an OLE_SRC I manually entered the SQL command. I had to do it manually because it seems that Query Builder doesn't (yet) support the OVER construct - when I click Build Query on the OLE_SRC component it says "The OVER SQL construct or statement is not supported.".
|||

Is your source sql server 2005?

If not, you'll have to use another technique to get the row number

Code Block

select
    colA
    ,(select count(*)
      from tableA b
      where b.colA <= a.colA) as RowNum
from tableA a

|||Yes, the source is SQL Server 2005. The OLE DB Source component has a Query Builder GUI which doesn't support the OVER construct. I entered the query manually and it works.

Lookup transform not finding blank match

I am having problems with a lookup transformation. I have a row in my lookup table for blank ('') source data. If I test the join using SQL the match is made, but the Lookup transform doesn't consider it a match and sends it to error output. Is there a property that I don't have set correctly or something else I am forgetting?

Check that you are not using empty string lookup against spaced "empty" strings or vice versa.

|||

I created the lookup table (dimension) and prepared the source table so I do know they are both empty strings.

Let me give a little more information. I have a package that runs prior to the fact table load (lookups). The package runs through the source and adds any dimension records that are not present. The problem arises when the lookup encounters an empty string from the source and it is unable to match on the empty string record already in the dim table, so it sends the record to the error output to be written to the dim table. But the write fails because the field is a primary key in the dim table and because the record exists it raises an error. In other words the Lookup transform can't see the empty string record, but the Write Transform can.

I hope that this explanation is understandable.

Dave

|||

Dave,

I've noticed when I pull back a string from tables, the string is padded with spaces to the full size of the field.

Try two things to see if one of these might be your problem:

1) After you pull in your columns from the source, add a derived column that concatenates a single character (like 'a') before and after the column that is return the empty string. Use a data viewer after this column, to see if the column looks like this: 'aa' or this 'a a'.

If it's the latter, change your derived column to trim the string, or trim the column when pulling it from the source.

2) In your initial lookup where you are trying to see if the empty string ('') exists, try using a SQL statement that selects the columns you are looking up. In your SQL statement, concatenate a single character in the same manner as step 1 to the empty string column and add it to your output. Then use a data viewer after this column to see what the column looks like.

If it's 'a a', trim your column in your SQL statement before returning it.

Good luck!

Jessica

|||

Like Jessica said, use the dataviewer to check the data.

SSIS uses .NET string comparison during the lookup process, which treats '' and ' ' as different, but SQL ignores trailing spaces during select joins.

If your OLE DB command insert statement checks whether the data exists before inserting, then you may always insert ' ' into the database but always ignore the ''.

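A quick way to see the difference from the database side (T-SQL treats trailing spaces as insignificant in an = comparison, while .NET string equality does not):

-- returns 'equal' in T-SQL; String.Equals("", " ") is false in .NET
SELECT CASE WHEN '' = ' ' THEN 'equal' ELSE 'not equal' END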
|||

Check out

http://blogs.conchango.com/kristianwedberg/archive/2006/02/22/2955.aspx

especially the comments at the end - you're not alone :-)

Cheers/Kristian

Lookup task with NOLOCK

I have an SSIS package that unpivots data - each row from the source becomes 20-24 rows in the destination db. In the dataflow it looks up some type-data IDs before inserting into the destination table. The whole process flies through the data at an incredible rate, however I find that the final commit on data insertion takes a very long time, and two of the lookup tasks remain yellow throughout. It appears that there may be a contention issue because the final table has FKs to those lookup tables.

In T-SQL it is possible to do SELECT..... WITH (NOLOCK), but when I modified the SQL statement of the lookup task by adding this clause, it caused an error "Incorrect syntax near the keyword 'with'."

Is there any way around this problem? THNX!

PhilSky

You could split the data-flow into 2, using raw files to pass data between them. This would alleviate blocking problems (if indeed that is the issue).

Incidentally, I have a request for enhancements to the LOOKUP component that means this would be even easier. Feel free to vote for these enhancements here: http://lab.msdn.microsoft.com/productfeedback/viewfeedback.aspx?feedbackid=58d0100b-aa34-4191-ae03-7655543614a1 and leave a comment as well if you could.

-Jamie

|||I've tried that and it works fine. What is the SQL statement you are trying to use? The other option is to change the transaction isolation level for the dataflow.

Monday, March 12, 2012

Lookup on input

I need to validate my input rows. A row is valid if certain other input rows exist in the same table (I am importing data from Excel and Access). I'll give an example to make everything clear:

Input table boys has the following columns: First_Name, Surname and Date_of_birth.

Output table is Twin_Triple_More_Brothers. I would like to insert into this table only boys whose surnames are equal and whose dates of birth differ by less than one day.

I was thinking about lookup component, but I cannot use it in that way (or I just do not know how).

Maybe someone has an idea how to do this?
Thanks for help.

You could try something like this...

-Send your source to a lookup with the lookup query on the surnames and the date of birth

-Send error rows of this lookup to another lookup with the lookup query on the surnames and the date of birth-1

-Send the successful lookups from the two lookup components to a union all component

-Send the output of the union all component to your destination

|||Use a lookup to match on Surname. Then return Date_of_birth from the lookup table.

Next, hook up to a derived column to calculate the difference between the input Date_of_birth and the lookup Date_of_birth.

Hook that up to a conditional split, where you test the calculation from above. If it's within your parameters, send to the insert path, otherwise do nothing.|||But how can I lookup in input? I know how to search in other table, but I do not know how to search in input columns.
Again thanks for help.
|||

Piotr Stapp wrote:

But how can I lookup in input? I know how to search in other table, but I do not know how to search in input columns.
Again thanks for help.

Create an initial data flow that simply takes your input and loads it to a table. Then in your second data flow, you can use an OLE DB source hooked up to that staging table, having a lookup pointing to the same table. You might have to work through the matching logic a bit, but it could work for you.|||So the only way is to use a temporary table?
Maybe it is not as good as I think, but it is also not so bad.
|||You could do this in SQL, probably, but this isn't the forum for that... ;-)
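For reference, a rough sketch of that SQL against the staged boys table (assumes the columns above; a real key would be needed to tell apart two boys who share a first name):

SELECT DISTINCT a.First_Name, a.Surname, a.Date_of_birth
FROM boys AS a
JOIN boys AS b
  ON  b.Surname = a.Surname
  AND b.First_Name <> a.First_Name  -- a different boy
  AND ABS(DATEDIFF(day, a.Date_of_birth, b.Date_of_birth)) <= 1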

Lookup error

Hi,

I'm using a Lookup object, but it doesn't work. Does anybody know this error?

[Lookup [29018]] Error: Row yielded no match during lookup.

[Lookup [29018]] Error: The "component "Lookup" (29018)" failed because error code 0xC020901E occurred, and the error row disposition on "output "Lookup Output" (29020)" specifies failure on error. An error occurred on the specified object of the specified component.

[DTS.Pipeline] Error: The ProcessInput method on component "Lookup" (29018) failed with error code 0xC0209029. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.

[DTS.Pipeline] Error: Thread "WorkThread0" has exited with error code 0xC0209029.

It means exactly what it says. The pipeline value that you were looking up doesn't exist in your LOOKUP dataset.

If finding no match is a valid business scenario you can configure the component to ignore these errors. Edit the "Lookup Output" output which currently is set to "Fail component on error".

-Jamie

|||Thanks!

Lookup current id in Kimball Type II dimension for fact rows

Hi all,

Maybe someone here can help me out: I have a Kimball type II dimension, where I track changes in a hierarchy. Each row has a RowStartDate and RowEndDate property to indicate from when to when a certain row should be used.

Now I want to load facts to that table. Each fact will have a certain date associated with it that I can use to look up the right Id (a certain SourceId can have multiple integer Ids when there are historic changes) and then load the facts.

Is there a building block I can use for that? I could do this with SQL scripts, but the client would prefer to have as much as possible done in SSIS. The Lookup transformation will only let me specify an equal (inner join where A=B) join, but I need equal for one column (SourceId) and then >= and <= (RowStart and RowEnd) to find the right row version.

Any ideas? Thanks in advance,

John

That question has been asked before. There are several workarounds to that problem; here is one:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=406485&SiteID=1

You may want to try a search with 'SSIS range lookup'

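For reference, the range join itself is plain T-SQL once the data is staged (table and column names are hypothetical):

SELECT f.SourceId, f.FactDate, d.Id
FROM FactStage AS f
JOIN DimHierarchy AS d
  ON  d.SourceId = f.SourceId
  AND f.FactDate >= d.RowStartDate
  AND f.FactDate <= d.RowEndDate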
|||

That's it, excellent! I was googling on the wrong words.

Thanks,

John

lookup and commit; How it works

Lookup and commit; How it works
I am importing data from a text file and I have a data lookup to check a reference table.

If the reference row doesn't exist I want to add a row to the reference table; otherwise the row is added to the detail table.

I am using an OLE DB destination to save the reference table row, with rows per batch set to 1 and maximum insert commit size set to 1.

When I run the package, duplicate rows show up in the grid view. How can duplicates end up there when the commit size is 1? The next time around, the data exists in the reference table and should go to the detail table, not the reference table.

Funny but this was just answered in another thread about the lookup cache. The dataflow works on buffers not rows so the lookup transform gets a whole buffer to process at one time. It then looks up every row in this buffer and after that passes it on downstream. So any row with the same key in that buffer will be looked up in the same manner and sent to the same output for processing. No setting on another transform can change this. Furthermore, depending on the cache type the lookup will load all its cache into memory (full and partial cache) so it will never (or not reliably) be updated by any changes to the underlying data.

HTH,

Matt

|||Well, I can guess that. How do I fix this? Add another transform? Where? After the lookup?

Well, you can't fix it. This is the design of the dataflow. You can try some workarounds such as setting the lookup to no cache mode or the dataflow's max rows to a low number (note that setting it to 1 doesn't work because the dataflow does rounding in order to not waste memory). However, none of these are certain to work in all cases. The only option is to write your own custom or script component that has logic to circumvent the buffering.

Matt

|||

Hi, I have a similar problem. Setting the lookup to no cache or the dataflow to a low number of max rows did not work at all. I ended up building my own asynchronous script component in order to achieve what the lookup or SCD wizard should be able to do.

Ccote

|||

We are currently investigating a ton of ideas around lookup, and this pattern is one we are taking into consideration.

However, I do want to point out that adding such a pattern requires us to add more logic to the lookup component and we are somewhat cautious about that. SSIS components are typically very tightly scoped in their functionality - design patterns are built with several different smaller components rather than in one more complex "catch all component." Lookup is an example of a component that does one fairly isolated function - complete integration patterns are built around it using other components.

The pattern we are dicussing here is actually more complicated than might first be thought - especially when one looks at the possibility of errors creeping in through system problems or bad data. (I have never in my career seen a data integration process where unexpected errors of all sorts did not creep in, so I tend to be cheerfully pessimistic in my designs.)

I'll recap the current requirement as follows, to make it clear for other readers:

We want to lookup the key from an incoming record in a reference table.|||

Great!!! It's really helpful. I have broken the process into two steps; the first step adds to the reference table, and in the next step the lookup works (the way it should).

I can understand that the import will be slow if every row is committed before lookup, but there should be an option; sometimes it should just work logically, and of course slow is not a word these days with powerful machines.

Thanks again, Donald, for your response.

Wednesday, March 7, 2012

Looking for suggestions. XML dataset

We have a table where each row holds an XML dataset. I need to extract the
dataset from each row and use the data in a SQL report server report. I
don't know a great deal about XML, so some suggestions would be great.
thanks
Start reading everything on www.sqlxml.org ...
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
"jerod" <jerod@.synergybusiness.com> wrote in message
news:%234XJpmezEHA.748@.TK2MSFTNGP14.phx.gbl...
> We have a table where each row holds an XML dataset. I need to extract the
> dataset from each row and use the data in a SQL report server report. I
> don't know a great deal about XML so, some suggestions would be great.
> thanks
>
|||Could you please elaborate on what "each row holds an XML dataset" means... does this mean you store the XML serialization of a dataset in a given column?
Andrew Conrad
Microsoft Corp
http://blogs.msdn.com/aconrad/