
Friday, March 30, 2012

Losing double quotes?

Hi,

I have written a little bit of VB.NET code that basically takes three strings, transforms them, and returns a single string to be stored in my table.

I am running into a strange problem, however... for some reason, a number of my processed rows are missing a pair of double quotes (").

The vast majority of the records are formatted properly, and have the double quotes in the expected locations.

The most frustrating thing about it is that I have included the offending input strings in my Test.sql test script, and when I step through the entire routine, the return value is perfect...

I apologize for being the worst ever at posting questions here; please let me know if I can add anything.

Could you please post your code? It's hard to guess what is going wrong without seeing it.

Jens K. Suessmeyer

http://www.sqlserver2005.de

|||

It is over 500 lines, but I could certainly post it once I get to work...

In the meantime, however, perhaps this might be useful:

I set a breakpoint at my function's final return statement and have observed the return value as follows:

Author of "Some random book." Thoughts and frustrations. Publisher: Me (San Diego). Published in 2007.

This is how I'd like, and expect, the value to appear in my database, but after SELECTING the particular row, I get:

Author of Some random book. Thoughts and frustrations. Publisher: Me (San Diego). Published in 2007.

I doubt this is much more revealing; I will be happy to post my code later if it would be any help.

|||

Another thing to add (probably useless):

The problem only occurs during an UPDATE. If I do:

SELECT dbo.EventBuilder([Title], [Author], [Published]) FROM MyTable

I get the correct output.
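For reference, a direct comparison along these lines (with [Event] as a placeholder for my real target column, which I haven't named here) would show exactly which rows lose the quotes:

-- [Event] is a placeholder; substitute the actual destination column.
SELECT [Title],
dbo.EventBuilder([Title], [Author], [Published]) AS Computed,
[Event] AS Stored
FROM MyTable
WHERE dbo.EventBuilder([Title], [Author], [Published]) <> [Event]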

...getting desperate here

|||

Check whether your database holds the same result.

I can't understand where and how your problem occurs.

Wednesday, March 28, 2012

LOOPING UPDATE

Hi
I just require a bit of guidance on a SQL query I am writing. I am updating
a table with values. The first field is a TYPE field; I set this to A
and then populate fields 2-15 with a variety of default updates and values
from other tables. I then do a second insert whereby I set the same TYPE
field to B and update fields 2-5 and then the unique fields 16-20.
At the moment my second update has two issues:
1. It sets EVERY type to B (although it correctly doubles the number of
entries in the table)
2. The B entries are appended to the bottom of the table; ideally I want the
table structure to be ABABABAB etc.
Any help to a SQL newbie appreciated!

> 1. It sets EVERY type to B (although it correctly doubles the number of
> entries in the table)
Sounds like your WHERE clause may be at fault. Could you post a CREATE TABLE
statement and the UPDATE/INSERT statements so that we can reproduce your
problem.

> 2. The B entries are appended to the bottom of the table; ideally I want
> the table structure to be ABABABAB etc.
Tables have no logical ordering. If you want to see the results in a
particular order then use ORDER BY on a SELECT statement when you query the
table. A clustered index orders data in physical storage but not necessarily
when you query the table.
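
As a rough sketch (invented table and column names, since we haven't seen
your DDL), the second statement needs a restriction like this, and the
ordering belongs on the query:

-- Invented names; illustrates the pattern only.
UPDATE YourTable
SET Field16 = 'some value'
WHERE [Type] = 'B' -- without this WHERE, every row gets updated

SELECT *
FROM YourTable
ORDER BY KeyCol, [Type] -- returns rows interleaved A, B, A, B, ... per key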
--
David Portas
SQL Server MVP

|||

Hi David
I have rewritten the part of the routine with a separate WHERE clause on the
type field at the end of the update, and this seems to work, so thank you.
I have several more loops to write; I will then try the ORDER BY at the end
of the routine.
Thank you very very much for your quick and informative response, it really
is appreciated.
Steve
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:8ZKdncKsYuPgDrHdRVn-uA@.giganews.com...
> Sounds like your WHERE clause may be at fault. Could you post a CREATE
TABLE
> statement and the UPDATE/INSERT statements so that we can reproduce your
> problem.
>
> the
> Tables have no logical ordering. If you want to see the results in a
> particular order then use ORDER BY on a SELECT statement when you query
the
> table. A clustered index orders data in physical storage but not
necessarily
> when you query the table.
> --
> David Portas
> SQL Server MVP
> --
>sql


Wednesday, March 21, 2012

Loop Question

Sorry if this is a bit of a newbie question.

I have three tables.

DECLARE @TempLineStatusXref TABLE(
Run_Id int,
Line varchar(50),
master_unit_id int,
Start_Time datetime,
End_Time datetime,
Duration numeric(18,9),
Team varchar(50),
Excluded int
)

CREATE TABLE dbo.#TempVars(
Var_Id INTEGER,
Data_Type_Id INTEGER,
User_Defined1 VARCHAR(10),
PU_Order INTEGER,
PU_Id INTEGER,
RequiredinReport INTEGER)

DECLARE @Required TABLE (
Var_id int,
Run_id int
)

In the @TempLineStatusXref table I have 8 records... 2 of them have Excluded = 1; the other 6 have Excluded = 0.

In table #TempVars I have 334 values, all of which are required.

What I want to do is: for each record in @TempLineStatusXref that has a 0 in the Excluded column, insert the Var_id from #TempVars together with the Run_id from @TempLineStatusXref.

So I should end up with (334 * 6) = 2004 records in @Required.

I appreciate any help.

Mark

Mark:

Try one of these two alternatives:

insert into @required
select a.var_id,
b.run_id
from #tempVars a
inner join @TempLineStatusXref b
on b.excluded = 0

or

insert into @required
select a.var_id,
b.run_id
from #tempVars a
cross join @TempLineStatusXref b
where b.excluded = 0
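
Note that both forms do the same thing here: the join condition only filters @TempLineStatusXref, so each of the 334 #tempVars rows is paired with every one of the 6 non-excluded rows, giving the 334 * 6 = 2004 rows expected.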

|||

yeap..

Insert into @Required
Select Var_Id, Run_Id
From @TempLineStatusXref, #TempVars
where Excluded = 0

it worked... thanks!!!!

Monday, March 19, 2012

lookup to check values in 13 columns with same values

Hi,

It is my first post on this forum, so please be patient if I miss any important bit of information.

I am transporting data from a legacy system into MSSQL 2k5 using SSIS.
Among the columns of this dataset there are 13 columns, all necessary for operational reasons, for which I need to ensure data consistency.

I believe I could do this check using the Lookup data flow item, but surely there must be a way to do it in a more streamlined fashion,
since the column names contain numbers to distinguish the version, e.g. col01, col02, col03 .. col13.

I thought I could include the lookup within a loop and use a couple of variables to do the trick, but since I have not done this before I am asking for some guidance from a guru among you folks.

Please let me know if further clarification is necessary.

Regards,

Nicolas

If you are trying to confirm that the 13 values for each row in the source match the corresponding row in the destination, then a lookup is the way to go. Using a single lookup with the 13 columns all mapped makes sense. Trying to do something dynamic is not going to work, and even if it did this would mean 13 individual lookup calls, so probably a lot slower due to the extra overhead.

Another method when you have lots of columns to compare is to use a hash or checksum across the columns. You can compute this checksum in the pipeline on the incoming source, and compare it against the checksum stored in the destination. This means the lookup is only one column, so for wide columns and large row counts it will be faster due to the reduced amount of data being transferred, and it is easier to select columns for checksum generation than to map them all in the lookup. It does mean that your destination needs to be expanded to store the checksum.

A checksum transform is available here (http://www.sqlis.com/default.aspx?21) or you can implement your own in the Script Component.
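
As a plain T-SQL illustration of the idea (table and column names are made up; in SSIS you would compute the hash in the pipeline rather than in a query), the comparison reduces to something like:

-- Made-up names. CHECKSUM is fast but can collide;
-- HASHBYTES is a safer (if slower) alternative.
SELECT s.BusinessKey
FROM Staging s
JOIN Destination d ON d.BusinessKey = s.BusinessKey
WHERE CHECKSUM(s.col01, s.col02, s.col03) <> d.RowChecksum -- extend the list through col13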

|||

Darren,

I understand your point about looking through 13 columns being slow, but I am not certain I understand how to hash all those columns to compare their values.

For clarification, I need to ensure that the values in all 13 columns comply with a foreign key constraint; that is, I am ensuring that only values listed in my reference table exist in those columns.

Would this scenario allow me to use a hash for comparing values?

Many thanks

|||

I may have misunderstood this. If the 13 columns form the key for one table, then the hash concept will work: the hash serves as a compound key for those 13 columns. If they are 13 different reference tables, then I was mistaken; you would use 13 lookups, one to check each table, which is probably what you meant first time around. There is no shortcut for those 13 lookups if that is what you require.

I digress a bit, but a good strategy can be to just load the data and let it fail on the FK violation. Set the component to redirect any failed rows to the error output. You can then do your lookups, or whatever you want to do, when you have missing reference data. This can be much faster than doing lookups for every row: if 100 rows are loaded but only 1 has a new key, the error output gets 1 row. That means 1 lookup (fix-up work) as opposed to 100.

|||

Hi Darren,

In the example you have assumed that the base tables contain the foreign key column values. What if we need to look up a secondary table, obtain the primary key from the secondary table, and populate it in the base table column?

Is it beneficial to handle these relations in SQL JOIN statements while fetching records from the source tables, rather than doing the lookup using Lookup components in the SSIS pipeline (which I believe works at record level, meaning a bulk lookup is impossible)?
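
Something like the following is what I mean by handling it in the source query (names invented for illustration):

-- Invented names: resolve the surrogate key with a JOIN in the
-- source query instead of a Lookup component in the pipeline.
SELECT s.col01,
r.NewCodeId -- primary key fetched from the secondary table
FROM LegacySource s
LEFT JOIN RefCodes r ON r.LegacyCode = s.col01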

Please share your comments.

Thanks,
Loonysan

|||

OK,

Since I have a similar situation with 4 other datasets, I would like to probe this idea further, as well as make sure I clarify it so that we understand the situation properly.

The source dataset that I extracted from our legacy data source has these 13 columns (col1..col13) among many.

On our destination system, I created a table (a lookup table) which will enable me to map and transform legacy codes into the relevant new codes in our destination table.

The dataset is intended to be inserted into a table that has 13 foreign keys (one for each of those 13 columns) which reference another table, thus ensuring that only the new codes will be inserted in any of those 13 columns.

Having said that, you believe that I should join the lookup table, try to insert into the destination table, and catch any errors?

Many thanks

Monday, March 12, 2012

Lookup cachetype = none question

Hi, I use a lookup transform in one of my data flows. My data looks a bit like this (the actual data is more complicated; it is a combination of type 1 - description and type 2 - code):

ID Code Description
-- ---- -----------
1  AAA  PRODUCT1
2  AAA  PRODUCT2
3  AAA  PRODUCT3
4  BBB  PRODUCT4

The problem is simple: I would like to insert ID 1 and update it with subsequent rows that have the same code.

If the lookup transform finds a match, the row is updated; otherwise, the row is inserted (using the error path of the lookup) via an OLE DB Command. The lookup CacheType is set to None. My problem is that all rows are inserted. But if I use a second lookup that fetches the ID using the Code column, the second lookup does see the inserted data.
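
For reference, the two OLE DB Command statements are roughly of this shape (the table name is a stand-in; the ? marks are the SSIS parameter markers):

-- Match path: update the existing row.
UPDATE MyTable SET Description = ? WHERE Code = ?
-- No-match (error) path: insert the new row.
INSERT INTO MyTable (Code, Description) VALUES (?, ?)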

My question is: why is the second lookup able to find the inserted data while the first one cannot? The SCD wizard cannot resolve this either. I worked around it by using an asynchronous Script Component and managing the logic in there. But still, a lookup with no cache (a round trip to the DB every time) should be able to do the job.

Thank you,
Ccote

Generally this is because the data flow works on buffers, not rows. All the rows in a buffer are processed by a component before they are passed on to the next component. So the first lookup doesn't find the data because it most likely hasn't actually been inserted yet, while the second one finds it because by then the data has actually been inserted.

HTH,

Matt