Showing posts with label records. Show all posts

Friday, March 30, 2012

Loss of records

Hi, I have the following problem.

In a flat file I have 250,000 rows, but when I load them into the DB only 249,995 arrive; 5 rows get lost.

I cannot see where the error occurs; the logging does not report any error, and I do not know what could be causing the problem. Does anyone know what might be happening here?

Please help.

If you execute the package in BIDS, you can see how many rows are output from each component. This should make it very easy to see where the rows are being lost.

-Jamie

Loss of inserted records during/after an insert

We have a system that records a data record for each cycle of a machine in an MS SQL Server database. These cycles take place approximately once every 10-12 seconds, and there are four stations on the machine, so we are writing approx. 24 records per minute. Our database contains four tables, one for each machine station. Each record contains a unique sequential number generated by the machine control software. Data is logged using SQL INSERT scripts in the application (Wonderware) that Operators use to control the machine. (Wonderware script, BTW, is not VBA, but a proprietary scripting language.)
Everything works fine UNTIL one of the stations encounters an operational fault and stops. This brings up a window on the control screen that requires the Operator to manually enter data, and an UPDATE statement is executed to modify the last record generated. Occasionally when this update is processed, a single record will be lost (never written) in one or more of the data tables.
At first we had all of the records going to one table. Thinking maybe the update for one station was somehow locking an index in the table, we separated the tables so that each station has its own table. Since the station is stopped, no new record is generated for that station until after the update is processed. The other stations can still be running, so they are generating INSERT commands, which could coincide with the UPDATE command. Both commands use the same connection, which is always open.
We still occasionally lose ONE record in one or more of the other tables when the UPDATE executes.
Any thoughts?
Message posted via http://www.sqlmonster.com
Use the profiler and watch the sql statements - the most likely culprit is a
logic error within the application. Based on your narrative, I would guess
that the problem lies in the error-handling logic.
"Lee Drendall via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:9981fa1e356140a298c4ffa13b629920@.SQLMonster.c om...
> We have a system that records a data record for each cycle of a machine in
an MS SQL Server database. These cycles take place approximately once every
10-12 seconds, and there are four stations on the machine, so we are writing
approx. 24 records per minute. Our database contains four tables, one for
each machine station. Each record contains a unique sequential number
generated by the machine control software. Data is logged using SQL INSERT
scripts in the application (Wonderware) that Operators use to control the
machine. (Wonderware script, BTW is not VBA, but is a proprietary scripting
language.)
> Everything works fine, UNTIL the one of the stations encounters an
operational fault, and stops. This brings up a window on the control screen
that requires the Operator to manually enter data, and an UPDATE statement
is executed to modify the last record generated. Occasionally when this
update is processed, a single record will be lost (never written) in one or
more of the data tables.
> At first we had all of the records going to one table. Thinking maybe the
update for one station was somehow locking an index in the table, we
separated the tables so that each station has its own table. Since the
station is stopped, no new record is generated for that station until after
the update is processed. The other stations can still be running, so they
are generating INSERT commands, which could coincide with the UPDATE
command. Both commands use the same connection, which is always open.
> We still occasionally lose ONE record in one or more of the other tables
when the UPDATE executes.
> Any thoughts?
> --
> Message posted via http://www.sqlmonster.com



Wednesday, March 28, 2012

Losing SQL connection while running ASP

We are running an ASP application that is fairly simple.

It selects from one table (15,000 records) and updates into another table.
But when we execute the ASP script, it reports the following error after
completing between 70%-80%. This varies each time it runs:

"Microsoft OLE DB Provider for SQL Server error '80004005'
[DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access
denied. "

The application is running in the following server environment:

Server 1

Fujitsu-Siemens RX300 Server (Dual 2,8 Xeon, 1 Gb Ram, 72 Gb SCSI Raid 1)

Windows 2003 Web Server (IIS)

Server 2

Fujitsu-Siemens RX300 Server (Dual 2,8 Xeon, 1 Gb Ram, 72 Gb SCSI Raid 1)

Windows 2003 Standard Server

Microsoft SQL Server 2000

Jesper Carstensen (jesper@swush.com) writes:
> We are running an ASP application. That is fairly simple.
> It selects from one table (15.000 records), and updates into another
> table. But when we execute the ASP script, it reports the following
> error, after completing between 70%-80%. This varies each time it runs
>
> "Microsoft OLE DB Provider for SQL Server error '80004005'
> [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access
> denied. "

So how exactly do you retrieve the rows and submit the updates? For
retrieval I would recommend client-side cursors.

Is the table you update on the same server as the one you read from?
In that case, the best may be to have all the updating logic in a stored
procedure, unless the logic is too complex for SQL.
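A minimal sketch of what Erland suggests, assuming hypothetical tables SourceTable and TargetTable joined on KeyCol (all table, column, and procedure names here are illustrative, not from the original post):

```sql
-- Hypothetical sketch: do the read and the update server-side in one
-- set-based statement, instead of pulling 15,000 rows to the ASP page
-- and updating them one at a time over the connection.
CREATE PROCEDURE dbo.usp_UpdateTargets
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE t
    SET    t.SomeColumn = s.SomeColumn   -- illustrative column
    FROM   TargetTable AS t
    JOIN   SourceTable AS s
           ON s.KeyCol = t.KeyCol;
END
```

The ASP page then issues a single `EXEC dbo.usp_UpdateTargets`, so a mid-loop connection drop can no longer leave the update partially applied row by row.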

--
Erland Sommarskog, SQL Server MVP, sommar@algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

We are seeing that EXACT same error pop up randomly on our web server
running Windows 2003 Server. Here's our setup:

Web Server 1 - Windows 2000 Server - Latest SP, MDAC, All Updates, etc.
Web Server 2 - Windows 2003 Server - All updates
SQL Server - Windows 2003 Server, SQL Server 2003 SP3a, All Updates

Now, we used to have both our web servers be 2000 server. We've
recently updated the 2nd one to be 2003 server. Ever since we have
been getting that same error (below) at random times. We seem to think
that it may fail under heavy load.

"Microsoft OLE DB Provider for SQL Server error '80004005'
[DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or
access denied. "

We've done just about everything we've seen suggested here in the
groups and on Microsoft's site without any success. We can't nail down
what is causing this problem. If ANYBODY can help point us in the
right direction we'd greatly appreciate it. What's odd is that I've
seen many posts reporting this same problem without any solutions
given. It seems that maybe there's some kind of issue with 2003 that
hasn't been documented or exposed yet.

This would be my checklist (1 and 2 are probably ruled out given that the
2000 server works with no problem)

one clue you may have to help you is the amount of time the web server takes
to throw that error when you access the page. Is it trying to connect and
then giving up after a few seconds, or is it failing immediately without
attempting to handshake with the server.

1) Check the DB server loading; increase the connection timeout if it's heavy for
extended periods.
2) Continuously ping the DB server from the web server(s) to see if there
are connection outages/packet loss.
3) Try a different (older) version of MDAC on the web server; fiddle with the
client settings.
4) Try varying the connection type (SQL logon, Windows logon, Named Pipes,
TCP/IP).
5) Run IIS under a different isolation level; fiddle with the application
settings.
6) Trap the error in the connection code and attempt a re-connect (3 strikes
and you're out).
7) Try something else ...
8) Get the 2000 server back out.

hope you find a solution to the problem, let us know if you do. :)

Mr Tea
http://mr-tea.blogspot.com

"Don Kitchen" <donkitchen@.gmail.com> wrote in message
news:1106327170.237818.86870@.c13g2000cwb.googlegro ups.com...
> We are seeing that EXACT same error popup randomly on our web server
> running Windows 2003 Server. Here's our setup:
> Web Server 1 - Windows 2000 Server - Latest SP, MDAC, All Updates, etc.
> Web Server 2 - Windows 2003 Server - All updates
> SQL Server - Windows 2003 Server, SQL Server 2003 SP3a, All Updates
> Now, we used to have both our web servers be 2000 server. We've
> recently updated the 2nd one to be 2003 server. Ever since we have
> been getting that same error (below) at random times. We seem to think
> that it may fail under heavy load.
> "Microsoft OLE DB Provider for SQL Server error '80004005'
> [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or
> access denied. "
> We've done just about everything we've seen suggested here in the
> groups and on Microsoft's site without any success. We can't nail down
> what is causing this problem. If ANYBODY can help point us in the
> right direction we'd greatly apprecaite it. What's odd is that I've
> seen many posts reporting this same problem without any solutions
> given. It seems that maybe there's some kind of issue with 2003 that
hasn't been documented or exposed yet.

"Don Kitchen" <donkitchen@gmail.com> wrote in message
news:1106327170.237818.86870@c13g2000cwb.googlegroups.com...
> We are seeing that EXACT same error popup randomly on our web server
> running Windows 2003 Server. Here's our setup:
> Web Server 1 - Windows 2000 Server - Latest SP, MDAC, All Updates, etc.
> Web Server 2 - Windows 2003 Server - All updates
> SQL Server - Windows 2003 Server, SQL Server 2003 SP3a, All Updates
> Now, we used to have both our web servers be 2000 server. We've
> recently updated the 2nd one to be 2003 server. Ever since we have
> been getting that same error (below) at random times. We seem to think
> that it may fail under heavy load.
> "Microsoft OLE DB Provider for SQL Server error '80004005'
> [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or
> access denied. "
> We've done just about everything we've seen suggested here in the
> groups and on Microsoft's site without any success. We can't nail down
> what is causing this problem. If ANYBODY can help point us in the
> right direction we'd greatly apprecaite it. What's odd is that I've
> seen many posts reporting this same problem without any solutions
> given. It seems that maybe there's some kind of issue with 2003 that
> hasn't been documented or exposed yet.

I've seen the same problem and I think it's specifically a memory
problem. It may be a more general "resource" issue correlated with
memory, but it is hard to pin down.

I'm happy to report that we have found the fix to our problems.
Without going into too much detail here, I can say that 2 registry keys
that we created in Windows 2003 fixed the problem.

See these two threads for more info about it:

http://groups-beta.google.com/group...57109e74649c7c8

http://groups-beta.google.com/group...075084961f5cdb7

Losing SQL connection while running ASP

We are running a fairly simple ASP application that connects to an MS SQL Server.
It selects from one table (15,000 records), and updates into another table.
But when we execute the ASP script, it reports the following error after completing between 70%-80%. This varies each time it runs:
"Microsoft OLE DB Provider for SQL Server error '80004005' [DBNETLIB][ConnectionOpen
(Connect()).]SQL Server does not exist or access denied."
The application is running in the following environment:
Server 1
Fujitsu-Siemens RX300 Server (Dual 2,8 Xeon, 1 Gb Ram, 72 Gb SCSI Raid 1)
Windows 2003 Web Server
Server 2
Fujitsu-Siemens RX300 Server (Dual 2,8 Xeon, 1 Gb Ram, 72 Gb SCSI Raid 1)
Windows 2003 Standard Server
Microsoft SQL Server 2000

I would recommend getting network traces on both the IIS machine and the
SQL Server while the ASP page is being called. The error you posted is
related to a new connection, not an existing query. Increasing the
LoginTimeout in code may help, but it sounds like either a coding issue or
a name resolution problem.
Thanks,
Kevin McDonnell
Microsoft Corporation
This posting is provided AS IS with no warranties, and confers no rights.

Same problem after installing IIS 6.0.
After a lot of tries, I switched the DB connection to ODBC.
I had the problem with OLE DB; I did not face the problem with ODBC.
quote:
Originally posted by mrgray
Same problem after installing IIS 6.0.
After a lot of tries, I switched the DB connection to ODBC.
I had the problem with OLE DB; I did not face the problem with ODBC.


Hi guys, any leads on the problem? I am stuck with this issue too,
hosting my script and MS SQL on separate servers.
I am also running Win2k3 and IIS 6.
DEAD END

looping through xml document with OpenXML

Hi,
I am trying to compare records from an XML document to records in a table.
If the records in the XML document are not in the table then I wish to
insert those records.
How do I loop through the XML document within a stored proc?
Has anyone done this who can provide a sample.
Thanks
Rather than looping, it'd probably be more efficient to use a NOT IN clause.
Have a look at the following example and see if you can adapt it for your
data:
USE Northwind
GO
DECLARE @productsDoc nvarchar(2000)
SET @productsDoc = '
<Products>
  <Product ProductID="1">
    <ProductName>Chai</ProductName>
    <UnitPrice>10</UnitPrice>
  </Product>
  <Product ProductID="101">
    <ProductName>Porridge</ProductName>
    <UnitPrice>16</UnitPrice>
  </Product>
  <Product ProductID="102">
    <ProductName>Haggis</ProductName>
    <UnitPrice>19</UnitPrice>
  </Product>
</Products>'
DECLARE @h integer
EXEC sp_xml_preparedocument @h OUTPUT, @productsDoc
SET IDENTITY_INSERT Products ON
INSERT Products (ProductID, ProductName, UnitPrice)
SELECT * FROM
    OPENXML(@h, 'Products/Product', 2)
    WITH
    (
        ProductID integer '@ProductID',
        ProductName nvarchar(40),
        UnitPrice money
    )
WHERE ProductID NOT IN
    (SELECT ProductID FROM Products)
SET IDENTITY_INSERT Products OFF
-- Release the document handle when done, or the parsed XML stays in memory
EXEC sp_xml_removedocument @h
Hope that helps,
Graeme
--
Graeme Malcolm
Principal Technologist
Content Master Ltd.
www.contentmaster.com
"Opa" <Opa@.discussions.microsoft.com> wrote in message
news:8CF49B69-9E84-4B7D-96AE-C2E0FCE854E2@.microsoft.com...
Hi,
I am trying to compare records from an XML document to records in a table.
If the records in the XML document are not in the table then I wish to
insert those records.
How do I loop through the XML document within a store proc?
Has anyone done this who can provide a sample.
Thanks
Thanks Graeme.
Looks like your code should work and I will give it a try.
"Graeme Malcolm" wrote:

> Rather than looping, it'd probably be more efficient to use a NOT IN clause.
> Have a look at the following example and see if you can adapt it for your
> data:
> USE Northwind
> GO
> DECLARE @.productsDoc nvarchar(2000)
> SET @.productsDoc = '
> <Products>
> <Product ProductID="1">
> <ProductName>Chai</ProductName>
> <UnitPrice>10</UnitPrice>
> </Product>
> <Product ProductID="101">
> <ProductName>Porridge</ProductName>
> <UnitPrice>16</UnitPrice>
> </Product>
> <Product ProductID="102">
> <ProductName>Haggis</ProductName>
> <UnitPrice>19</UnitPrice>
> </Product>
> </Products>'
> DECLARE @.h integer
> EXEC sp_xml_preparedocument @.h OUTPUT, @.productsDoc
> SET IDENTITY_INSERT Products ON
> INSERT Products (ProductID, ProductName, UnitPrice)
> SELECT * FROM
> OPENXML(@.h, 'Products/Product', 2)
> WITH
> (
> ProductID integer '@.ProductID',
> ProductName nvarchar(40),
> UnitPrice money
> )
> WHERE ProductID NOT IN
> (SELECT ProductID FROM Products)
> SET IDENTITY_INSERT Products OFF
> Hope that helps,
> Graeme
> --
> --
> Graeme Malcolm
> Principal Technologist
> Content Master Ltd.
> www.contentmaster.com
>
> "Opa" <Opa@.discussions.microsoft.com> wrote in message
> news:8CF49B69-9E84-4B7D-96AE-C2E0FCE854E2@.microsoft.com...
> Hi,
> I am trying to compare records from an XML document to records in a table.
> If the records in the XML document are not in the table then I wish to
> insert those records.
> How do I loop through the XML document within a store proc?
> Has anyone done this who can provide a sample.
> Thanks
>
>

Monday, March 26, 2012

Looping through records

Hello,

I have following VB code, in which I open a table and check whether there are records:

Dim SQL_CONNECTION_NTFNS As New SqlConnection
Dim SQL_COMMAND_NTFNS As New SqlCommand
Dim SQL_DATASET_NTFNS As New DataSet
Dim SQL_ADAPTER_NTFNS As New SqlDataAdapter(SQL_COMMAND_NTFNS)

SQL_CONNECTION_NTFNS.ConnectionString = str_DBSE_Connect
SQL_COMMAND_NTFNS.Connection = SQL_CONNECTION_NTFNS
SQL_COMMAND_NTFNS.CommandText = "SELECT * FROM Table"

Try
SQL_COMMAND_NTFNS.Connection.Open()
SQL_ADAPTER_NTFNS.Fill(SQL_DATASET_NTFNS)

Dim SQL_READER_NTFNS As SqlClient.SqlDataReader = SQL_COMMAND_NTFNS.ExecuteReader
With SQL_READER_NTFNS.Read
If SQL_READER_NTFNS.HasRows Then
?
SQL_READER_NTFNS.NextResult()
End If
End With
Catch
' Catch Something
Finally
' Dispose everything
End Try

At the ? I want to loop through the records and perform some actions, until it reaches the end (EOF?), thus a 'do while until loop'.

How can I do this?

Hello Seppe001,

Not sure what this has to do with Reporting Services; however, you can use the following code to loop through the records:

If SQL_READER_NTFNS.HasRows Then
    While SQL_READER_NTFNS.Read()
        ' your actions
    End While
End If

Txs Alain, that helped!!

Looping through records

Hi,
I have a table with papers and a second table with types of papers. Now
between papers and types there is a many-to-many relationship.
But in my application I would like to display records this way:
e.g.
paper1, type1 type2 ... typex
paper2, type2
paper3, type3 type5
..
I don't know how to create a view, procedure or function to loop
through the types for one paper id.
Something like:
for i = 0 to counter(number of records for paperid)
    string_types = string_types + type
next record
Thank you
*** Sent via Developersdex http://www.examnotes.net ***

Here is an example, though I would not rely on it :-))))
Seriously, this kind of report is better done on the client side.
create table w
(
id int,
t varchar(50)
)
insert into w values (1,'abc')
insert into w values (1,'def')
insert into w values (1,'ghi')
insert into w values (2,'ABC')
insert into w values (2,'DEF')
select * from w
create function dbo.fn_my (@id int)
returns varchar(100)
as
begin
    declare @w varchar(100)
    set @w = ''
    select @w = @w + coalesce(t, '') + ',' from w where id = @id
    return @w
end
select id,
    dbo.fn_my(dd.id)
from
(
    select distinct id from w
) as dd
drop function dbo.fn_my
<schapopa> wrote in message news:uw4Wrl8%23FHA.272@tk2msftngp13.phx.gbl...
> Hi,
> I have a table with papers and second table with types of papers. Now
> beetwen papers and types there is many to many relationship.
> But in my application I would like to display records the way:
> eg.
> paper1, type1 type2 ... typex
> paper2, type2
> paper3, type3 type5
> ..
> I don't know how to create view, procedure or function to create loop
> through types for one paperid.
> something like
> for i = 0 to counter(numeber of records for paperid)
> string_types = string_types + type
> next record
> Thank you
> *** Sent via Developersdex http://www.examnotes.net ***

schapopa (schapopa) writes:
> I have a table with papers and second table with types of papers. Now
> beetwen papers and types there is many to many relationship.
> But in my application I would like to display records the way:
> eg.
> paper1, type1 type2 ... typex
> paper2, type2
> paper3, type3 type5
> ..
> I don't know how to create view, procedure or function to create loop
> through types for one paperid.
> something like
> for i = 0 to counter(numeber of records for paperid)
> string_types = string_types + type
> next record
> Thank you
In SQL 2000, the only reliable way to do this is to run a cursor. Uri showed
you an example with a function, but that relies on undefined behaviour, and
may not produce the correct result.
In SQL 2005, there is syntax for this, thanks to the XML support. (It's
somewhat obscure, but it works.)
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Thank you for your help. I used the function that Uri showed me as an
example. It works properly.
Cheers
Schapopa
*** Sent via Developersdex http://www.examnotes.net ***|||Thank you for your help. I used function that Uri showed me as an
example. It works properly.
Cheers
Schapopa
*** Sent via Developersdex http://www.examnotes.net ***
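Erland's SQL 2005 remark refers to the concatenation trick built on the XML support (FOR XML PATH). A minimal sketch against the same `w(id, t)` table from Uri's example, assuming SQL Server 2005 or later:

```sql
-- Set-based string concatenation per id (SQL Server 2005+).
-- The inner FOR XML PATH('') query glues the t values together;
-- STUFF strips the leading comma.
SELECT dd.id,
       STUFF((SELECT ',' + t
              FROM w
              WHERE w.id = dd.id
              FOR XML PATH('')), 1, 1, '') AS types
FROM (SELECT DISTINCT id FROM w) AS dd
```

Unlike the variable-assignment function, this form has defined behaviour, and it avoids both the cursor and the per-row function call.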

Friday, March 23, 2012

Loop through records?

I am new to stored procs.
For example, in ASP or VB6 we usually use a Do While ... rs.MoveNext ... Loop to go through all the records in a table.
What is the equivalent in stored procs?
Please advise.
Thanks.

Of course you can use a cursor, but it is much better to use rowset statements - it depends on what you want to do with your data.

Thanks snail.

I just figured out cursors through trial and error by reading the books online.

i also just finished reading thru news groups.

Many say that cursors are harmful and hog system resources. Because this script will be run whenever a user logs into the system, what would be the alternative to cursors? One alternative I have heard of is to pass some data into temp tables.

pls advise.

here are the scripts i wrote:

CREATE PROCEDURE usr_clearworkerdp AS
DECLARE @wid varchar(10)
DECLARE workercursor CURSOR FOR
    SELECT worker_id FROM worker
    WHERE DATEDIFF(year, date_of_clearance, GETDATE()) = 0
       OR DATEDIFF(year, date_of_clearance, GETDATE()) > 0
OPEN workercursor
FETCH NEXT FROM workercursor INTO @wid
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE violations_committed SET dp = '0' WHERE worker_id = @wid
    FETCH NEXT FROM workercursor INTO @wid
END
CLOSE workercursor
DEALLOCATE workercursor
GO

If your update is as simple as the code you wrote, can't you just do this?

UPDATE worker
SET dp = '0'
WHERE DATEDIFF(year, date_of_clearance, GETDATE()) >= 0

There is nothing about your update that would require doing anything one row at a time.

Rob

Originally posted by rgarrison
If your update is as simple as the code you wrote, can't you just do this?

UPDATE worker
SET dp = '0'
WHERE DATEDIFF(year, date_of_clearance, GETDATE()) >= 0

There is nothing about your update that would require doing anything one row at a time.

Rob

the date_of_clearance belongs to the worker table. The logic is to check the dates, then if more than a year has passed, update the dp column in the violations_committed table.

Your update changes to:
update b
set b.dp = 0
from worker a,violations_committed b
where DATEDIFF(year, a.date_of_clearance, GETDATE()) >= 0
and a.worker_id = b.worker_id

;)

Monday, March 19, 2012

Lookup tables

For a "tblFormObject" table whose every single record links to many different
records in "tblFormInstances" table, I think I want
tblFormObject.frmId as Primary Key
tblFormInstances.frmId as Foreign Key
tblFormInstances.InstId as Primary Key
However, in the tblFormInstances table I also have a lookup column called
tblFormInstances.statId which should link to
tblFormStatuses:
tblStatId PrimaryKey
tblStatDescription
Obviously, tblFormInstances.statId is not a primary key in tblFormInstances,
as different form instances can have the same form status! But how could I
set the relationship? I want tblFormInstances.StatId as primary and
tblFormStatuses.tblStatId as foreign. However, when I try to create such a
relationship with Enterprise Manager, ticking all options except cascade
update/delete, I get the error:
The columns in table 'tblFormInstances' do not match an existing
primary key or UNIQUE constraint
How could I resolve this? What keys do I need/want?

Hi
I think that:
tblFormInstances.statId has a foreign key to tblFormStatuses.tblStatId
In the same way that
tblFormInstances.frmId has a Foreign Key to tblFormObject.frmId
It seems you are trying to create the foreign key for statId the wrong way
around!
John
"Patrick" wrote:
> For a "tblFormObject" table whose every single record links to many different
> records in "tblFormInstances" table, I think I want
> tblFormObject.frmId as Primary Key
> tblFormInstances.frmId as Foreign Key
> tblFormInstances.InstId as Primary Key
> However, in the tblFormInstances table I also have a lookup column called
> tblFormInstances.statId which should link to
> tblFormStatuses:
> tblStatId PrimaryKey
> tblStatDescription
> Obviously, tblFormInstances.statId is not a primary key in tblFormInstances
> as different form instances can have the same form status! But How could I
> set the relationship' I want tblFormInstances.StatId as primary and
> tblFormStatuses.tblStatId as foreign' However, when I try to create such an
> relationship with Enterprise Manager, ticking all options except cascade
> update/delete, I get errror:
> The columns in table 'tblFormInstances' do not match an existing
> primary key or UNIQUE constraint
> How could I resolve this? What keys do I need/want?|||Patrick,
tblFormInstances needs foreign key linking to both tblFormObject via frmId
and to tblFormStatuses via StatId.
hth
Quentin
"Patrick" <questions@.newsgroup.nospam> wrote in message
news:E155F114-94C9-42B8-9108-F86C3B824BA0@.microsoft.com...
> For a "tblFormObject" table whose every single record links to many
> different
> records in "tblFormInstances" table, I think I want
> tblFormObject.frmId as Primary Key
> tblFormInstances.frmId as Foreign Key
> tblFormInstances.InstId as Primary Key
> However, in the tblFormInstances table I also have a lookup column called
> tblFormInstances.statId which should link to
> tblFormStatuses:
> tblStatId PrimaryKey
> tblStatDescription
> Obviously, tblFormInstances.statId is not a primary key in
> tblFormInstances
> as different form instances can have the same form status! But How could
> I
> set the relationship' I want tblFormInstances.StatId as primary and
> tblFormStatuses.tblStatId as foreign' However, when I try to create such
> an
> relationship with Enterprise Manager, ticking all options except cascade
> update/delete, I get errror:
> The columns in table 'tblFormInstances' do not match an existing
> primary key or UNIQUE constraint
> How could I resolve this? What keys do I need/want?|||I thought so too!
tblFormInstances.statId as primary key table
tblFormStatuses.statID as foreign key table
I have under Enterprise Manager for tblFormInstances:
Relationship name: FK_tblFormStatuses_tblFormInstances:
Primary Key Table: tblFormInstances statId
Foreign Key Table: tblFormStatuses statId
Check existing data on creation: ticked
Enforce relationship for replication: ticked
Enforce relationship for INSERTs and UPDATEs: ticked
Cascades: NOt ticked
but I get an error saying "The columns in table 'tblFormInstances' do not match
an existing primary key or UNIQUE constraint"
Trying to swap the tables the other way round does not help either.
"Quentin Ran" wrote:
> Patrick,
> tblFormInstances needs foreign key linking to both tblFormObject via frmId
> and to tblFormStatuses via StatId.
> hth
> Quentin
>
> "Patrick" <questions@.newsgroup.nospam> wrote in message
> news:E155F114-94C9-42B8-9108-F86C3B824BA0@.microsoft.com...
> > For a "tblFormObject" table whose every single record links to many
> > different
> > records in "tblFormInstances" table, I think I want
> > tblFormObject.frmId as Primary Key
> > tblFormInstances.frmId as Foreign Key
> > tblFormInstances.InstId as Primary Key
> >
> > However, in the tblFormInstances table I also have a lookup column called
> > tblFormInstances.statId which should link to
> > tblFormStatuses:
> > tblStatId PrimaryKey
> > tblStatDescription
> >
> > Obviously, tblFormInstances.statId is not a primary key in
> > tblFormInstances
> > as different form instances can have the same form status! But How could
> > I
> > set the relationship' I want tblFormInstances.StatId as primary and
> > tblFormStatuses.tblStatId as foreign' However, when I try to create such
> > an
> > relationship with Enterprise Manager, ticking all options except cascade
> > update/delete, I get errror:
> > The columns in table 'tblFormInstances' do not match an existing
> > primary key or UNIQUE constraint
> >
> > How could I resolve this? What keys do I need/want?
>
>|||Hi
It would be easier to understand if you posted the DDL
http://www.aspfaq.com/etiquette.asp?id=5006
Once you are used to the T-SQL syntax it is often easier to write the code
rather than mess around with the EM GUI!
This does not sound correct:
Primary Key Table: tblFormInstances statId
Foreign Key Table: tblFormStatuses statId
Try making it
Primary Key Table: tblFormStatuses statId
Foreign Key Table: tblFormInstances statId
John
"Patrick" wrote:
> I thought so too!
> tblFormInstances.statId as primary key table
> tblFormStatuses.statID as foreign key table
> I have under Enterprise Manager for tblFormInstances:
> Relationship name: FK_tblFormStatuses_tblFormInstances:
> Primary Key Table: tblFormInstances statId
> Foreign Key Table: tblFormStatuses statId
> Check existing data on creation: ticked
> Enforce relationship for replication: ticked
> Enforce relationship for INSERTs and UPDATEs: ticked
> Cascades: NOt ticked
> but I get error saying "The columns in table 'tblFormInstances' do not match
> an existing primary key or UNIQUE constraing"
> Try swapping the table the other way round does not help either.
> "Quentin Ran" wrote:
> > Patrick,
> >
> > tblFormInstances needs foreign key linking to both tblFormObject via frmId
> > and to tblFormStatuses via StatId.
> >
> > hth
> >
> > Quentin
> >
> >
> > "Patrick" <questions@.newsgroup.nospam> wrote in message
> > news:E155F114-94C9-42B8-9108-F86C3B824BA0@.microsoft.com...
> > > For a "tblFormObject" table whose every single record links to many
> > > different
> > > records in "tblFormInstances" table, I think I want
> > > tblFormObject.frmId as Primary Key
> > > tblFormInstances.frmId as Foreign Key
> > > tblFormInstances.InstId as Primary Key
> > >
> > > However, in the tblFormInstances table I also have a lookup column called
> > > tblFormInstances.statId which should link to
> > > tblFormStatuses:
> > > tblStatId PrimaryKey
> > > tblStatDescription
> > >
> > > Obviously, tblFormInstances.statId is not a primary key in
> > > tblFormInstances
> > > as different form instances can have the same form status! But How could
> > > I
> > > set the relationship' I want tblFormInstances.StatId as primary and
> > > tblFormStatuses.tblStatId as foreign' However, when I try to create such
> > > an
> > > relationship with Enterprise Manager, ticking all options except cascade
> > > update/delete, I get errror:
> > > The columns in table 'tblFormInstances' do not match an existing
> > > primary key or UNIQUE constraint
> > >
> > > How could I resolve this? What keys do I need/want?
> >
> >
> >
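Following John's suggestion, the two foreign keys could be written in T-SQL instead of the EM GUI; a sketch, assuming InstId and tblStatId are the primary-key columns as described in the thread:

```sql
-- statId in tblFormInstances references the status lookup table,
-- in the same direction as the existing frmId foreign key.
ALTER TABLE dbo.tblFormInstances
    ADD CONSTRAINT FK_tblFormInstances_tblFormStatuses
    FOREIGN KEY (statId) REFERENCES dbo.tblFormStatuses (tblStatId);

ALTER TABLE dbo.tblFormInstances
    ADD CONSTRAINT FK_tblFormInstances_tblFormObject
    FOREIGN KEY (frmId) REFERENCES dbo.tblFormObject (frmId);
```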

Monday, March 12, 2012

Lookup Error handling in SSIS

Hi,

I am new to using SSIS. I need to know how I can retrieve the records in a Lookup component that cause an error, so I can use them in a Data Transfer task. I created the error event handler, but I don't know how to retrieve the records causing the error to use them in the Data Transfer task.

Thanks in advance for help!

Thanks,

Aref

I tried this and it worked.

I redirected the records in the error configuration to be the data source for the data destination.

Thanks,

Aref

|||Just to be clear for anyone not sure of this, the error flow is not always nasty errors; it can be good stuff too. Think of "error" as the non-default condition, perhaps. For a lookup, error rows are those that the lookup failed to match, so when loading a table that already has rows, you would often use the Redirect option to send "new" rows down the error output, and then insert them.

Lookup component question

Hi,

I am doing a lookup to insert new records when the lookup has failed.

This works perfectly normally. However, when my recordset has a name column of type string with width 5 and my lookup table has a name column of char(20), the lookup will always fail and hence always insert new records, although the name "foo" should match.

Is there a workaround for this, or do the compare columns always have to be of the same type/length?

I would say it is always good practice to have matching data types. I was not aware that the lookup transform treated this situation as an error, but it does not surprise me.

You can add a Data Conversion before the lookup to ensure matching data types.
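Another option is to trim the padding on the reference side; char(20) stores "foo" padded with trailing spaces, which is what defeats the match. A sketch of a lookup reference query (the table and column names here are illustrative):

```sql
-- Strip the char(20) padding so the value compares equal
-- to the 5-character pipeline string.
SELECT RTRIM(name) AS name, id
FROM dbo.LookupTable
```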

Friday, March 9, 2012

Lookup - no matched records

Hi All,

I have two tables, TableA and TableB; both tables have two fields: c_IDA char(10) and c_IDB char(10). An import text file includes the ID data; the data will be inserted into TableB only when the ID exists in TableA.

A line in the import text file looks like this:

00000023450000012345

In the text file, positions 1-10 are c_IDA and positions 11-20 are c_IDB.

In the Derived Column transformation,

set the column name IDA as expression: SUBSTRING(LINE,1,10)

set the column name IDB as expression: SUBSTRING(LINE,11,10)

In the Lookup transformation that follows, I created the reference table with a SQL statement: Select c_IDA,c_IDB from TableA. The IDA and IDB in the pipeline are linked to the reference table's c_IDA and c_IDB, and I then set up the error output to another log file.

The problem is that even though the ID exists in TableA, the Lookup always generates the error output, which means the ID is not being found in TableA at all.

In the sample above, if I run the SQL in SSMS:

Select * from TableA where c_IDA = '0000002345' and c_IDB = '0000012345'

there is one record retrieved.

Any idea?

TIA

Are you casting the Derived Column to a DT_STR to match your char(10), or is it still set to DT_WSTR?

|||

I am using DT_STR

Thanks

|||

Have you used a data viewer immediately before the Lookup to validate that the values are what you expect?

Another potential issue is the collation settings on the database. Do you know what they are?

|||

Thanks again.

I put the data viewer before the Lookup and the data is displayed correctly. I am using digits to represent the strings; is this related to collation?

|||Collation affects sorting. It's one of the settings made when SQL Server is set up, and under 2005 it can be configured for each database individually. It affects string comparisons, in some cases.

|||

But why am I able to retrieve the correct records via Select in SSMS?

jwelch wrote:

Collation affects sorting. It's one of the setting made when the SQL Server is set up, and under 2005, it can be configured for each database individually. It affects string comparisons, in some cases.

|||

Because there the entire string is being interpreted by the database engine, versus in SSIS, where some data is coming in from a flat file and the other data is retrieved from the database engine.

That being said, I don't know for certain that this is your problem, but it is something to check.

|||

When I use Select SERVERPROPERTY(N'Collation'), it returns: Latin1_General_CI_AS

That means:

Case-insensitive, accent-sensitive, kana-insensitive, width-insensitive

Is that it?

jwelch wrote:

Because there the entire string is being interpeted by the database engine, versus in SSIS where some data is coming in from a flat file, and the other data is retrieved from the database engine.

That being said, I don't know for certain that this is your problem, but it is something to check.

|||

That one is unlikely to be causing this problem.
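When a lookup matches in SSMS but not in SSIS, one quick check is whether the stored values carry hidden padding or are wider than expected; a diagnostic sketch using the table from the thread:

```sql
-- LEN ignores trailing spaces while DATALENGTH counts bytes; for a
-- char(10) column DATALENGTH should be 10, and a LEN below that
-- reveals trailing-space padding (double the bytes would mean nchar).
SELECT c_IDA, LEN(c_IDA) AS char_len, DATALENGTH(c_IDA) AS byte_len
FROM TableA
WHERE c_IDA = '0000002345'
```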

Lookup - full caching vs partial caching vs no caching

I needed to do lookups on tables with approx 1 million records (how else do I know if a record already exists?).

Full caching:
SSIS caches all million records (consuming around 1 GB RAM) and cripples my lowly system with only 1GB physical RAM. This works for now, but the lookup tables will keep on growing. This means that full pre-caching may not scale for this scenario.

Partial caching and No caching:
I don't really know how SSIS works in these modes. I can make my guesses, but it would be great if someone could explain them. The behavior that I see when I use a partial cache of varying sizes, or no cache at all, is that SSIS loads one buffer's worth of rows (6040) from the source and processes them, then loads the next 6040 rows from the source. Of course, due to the cache limit, processing slows down considerably. The problem with this approach is that the source can have anywhere from 300,000 records to 2 million records. Processing them 6040 rows at a time, at the reduced speed due to caching, takes a very long time. Of course, the system doesn't get crippled, but the duration to execute the same package gets magnified a lot (I have not collected statistics yet, but it is close to an order of 10, i.e. 5 minutes with full pre-caching become 40-50 minutes or maybe more).
Any suggestions? Ideas? Experience with partial cache?

thanks,
Nitesh|||Nitesh,

Partial and no cache are very similar. With no cache, the lookup makes a round trip to the database for each lookup. With a partial cache, the lookup checks whether the value to look up is in the cache; if so, it uses it. If not, it makes a round trip to the database and adds that record to the cache. If the cache is full, LRU is used to remove an entry so the new one can be added.

So it seems you have several options to evaluate:

1. Order your data so that the lookup hits the cache a lot.
2. Create a temp lookup table that contains the most frequently hit lookups (if this is indeed the case in your data sets) and do a full cache on this, then use an error output from that lookup to go to a partial or no cache lookup for the ones that don't hit in the full cached frequent items lookup.
3. Use a MergeJoin as an alternative as MergeJoin can be used to do a lookup. Generally it is not as performant due to the lookup's caching but it may be if you are using partial or no cache due to memory constraints.

HTH,
Matt|||Of course you must bear in mind that partial and no cache options do not join correctly if there are any null values :(
(This is a bug that is designed in, and documented as such.)
|||I've found that by writing the SQL to select the fields you require, rather than checking the boxes in the whole list of available fields for the table, the lookup caches a lot less data. It appears that even if you only select a couple of columns from, say, a 10-column table (one field to be returned and one field for the join, selected from the right table), the lookup still pulls back the whole table rather than just the selected fields it requires.

Therefore, write the select to pull back only the fields that you require; for example, it could be a key field for the join and an email address to be returned by the lookup. When you go to the second tab you will then only see these two fields in the table on the right side. You will find that your lookup runs a lot quicker and consumes a lot less memory.|||How does one turn on "partial caching" for a lookup transformation?
|||

Set CacheType="Partial"

CacheType is a property of the LOOKUP transform.

-Jamie

|||Ah, yes. I was looking for it on the Edit dialog, not the property page. I sometimes forget to look in both places.

Thanks,
Dan
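The narrow-query advice above can be sketched like this (table and column names are hypothetical):

```sql
-- Only the join key and the one returned column, instead of
-- selecting the whole 10-column reference table.
SELECT CustomerKey, EmailAddress
FROM dbo.DimCustomer
```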

Lookup

Hi,

Before inserting the records of a recordset, I would like to check that each of these records doesn't already exist in destination table 1. If the records exist, they should be redirected to another table 2.

For this purpose I use a lookup task. The reference table is destination table 1. The green line is linked to table 2 and the red line is linked to table 1.

The first execution works properly and copies the records to table 1, but on subsequent executions the records are copied to table 1 instead of being copied to table 2.

Any idea?

Thanks.

What do you mean when you say next executions?

Are you using a for each/next loop, or just running the package over again?

You could try adding a unique index on table 1, skipping the lookup, and inserting records straight from the source to the destination, with error rows sent to table 2.

|||I mean just running the package over again.|||

Andrew,

It looks like the Lookup is not finding any match in the 2nd execution. The lookup transformation is case sensitive; check that the casing of the incoming rows and the reference rows matches, or use the UPPER/LOWER functions on both the source and the lookup.
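A sketch of normalizing case on the lookup side (the names are illustrative); the same UPPER(...) would be applied on the pipeline side, e.g. in a Derived Column, so both sides compare equal:

```sql
-- Upper-case the reference key so the case-sensitive lookup
-- still matches rows that differ only in casing.
SELECT UPPER(name) AS name, id
FROM dbo.Table1
```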

looking value in table

Hello there
I have a huge table with at least 20,000,000 records. On it I have Field1,
which has unique values.
The same value is sometimes shown in the same table in Field10, but not on the
same record, and there it isn't unique.
I need to build a query that returns all the records whose Field1 exists
somewhere in Field10.
Any query that I build takes more than 1 hour to run.
How can I write a query that will run fast?|||Roy, shalom
CREATE TABLE #Test (c1 INT NOT NULL PRIMARY KEY,c2 INT)
INSERT INTO #Test VALUES (1,10)
INSERT INTO #Test VALUES (2,1)
INSERT INTO #Test VALUES (3,4)
INSERT INTO #Test VALUES (4,5)
INSERT INTO #Test VALUES (5,4)
INSERT INTO #Test VALUES (6,7)
INSERT INTO #Test VALUES (7,10)
INSERT INTO #Test VALUES (8,1)
SELECT c1 FROM #Test
WHERE EXISTS (SELECT * FROM #Test T WHERE #Test.c1=T.c2)
Try creating an index on c1, c2 and see what is going on
"Roy Goldhammer" <roy@.hotmail.com> wrote in message
news:e6sex1PbGHA.3740@.TK2MSFTNGP03.phx.gbl...
> Hello there
> I have huge table with at least 20,000,000 records, On that I have Field1
> who has unique values.
> The same value sometimes shown on the same table on Field10 but not on the
> same record and it isn't unique.
> i need to build query that returs all the records with Field1Who exist
> somewere on Field10.
> Any query that i build took more then 1 hour to work.
> How can i do it on query that will run fast?
From what you have given, Field1 is the primary key and it should already be
clustered-indexed. If so, then just do this
and try to execute the query again:
Create a non-clustered index on Field10 alone.
Hope this helps.
--
"Roy Goldhammer" wrote:

> Hello there
> I have huge table with at least 20,000,000 records, On that I have Field1
> who has unique values.
> The same value sometimes shown on the same table on Field10 but not on the
> same record and it isn't unique.
> i need to build query that returs all the records with Field1Who exist
> somewere on Field10.
> Any query that i build took more then 1 hour to work.
> How can i do it on query that will run fast?
>
>
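The suggested fix, spelled out in T-SQL (the table name dbo.BigTable is hypothetical; Field1 and Field10 are from the post):

```sql
-- A non-clustered index on Field10 lets the EXISTS probe seek
-- instead of scanning 20,000,000 rows for every outer row.
CREATE NONCLUSTERED INDEX IX_BigTable_Field10 ON dbo.BigTable (Field10);

SELECT t.Field1
FROM dbo.BigTable AS t
WHERE EXISTS (SELECT * FROM dbo.BigTable AS x WHERE x.Field10 = t.Field1);
```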

Saturday, February 25, 2012

Looking for recommended approach to merging records

I am trying to create a dimension table and I am pulling in data from two tables to create it. I need all records from table A, any records from table B that are not in table A, and I need to use the fields from B for those records that do match. What would be the best way to approach this: merge join + derived columns, or union all + aggregation? Any suggestions?

It seems like it's harder to do this in SSIS rather than just doing it in the database.

If you think it is easier to do it in the database then use the database. It is an oft-made assumption that if you're using SSIS then you should use it to do EVERYTHING. That's not the case - database engines do what they do fantastically well, so use them if you can, for example by:

Writing SQL in an OLE DB Source
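One SQL sketch of the rule described above (key and column names are illustrative, and it assumes B's columns are non-null where a B row exists): every key from A or B, taking B's values whenever B has the row.

```sql
-- Matched rows and B-only rows get B's fields;
-- A-only rows fall back to A's fields.
SELECT COALESCE(b.KeyCol, a.KeyCol) AS KeyCol,
       COALESCE(b.Attr1, a.Attr1)   AS Attr1,
       COALESCE(b.Attr2, a.Attr2)   AS Attr2
FROM TableA AS a
FULL OUTER JOIN TableB AS b
    ON b.KeyCol = a.KeyCol;
```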

Monday, February 20, 2012

Looking for duplicate and consecutive records in a table

I'm trying to get the LastcallResult = 'Not Home' results that happen more than once, in consecutive order. This cursor displays the records, but it doesn't grab the occurrences that happen one after the other, in consecutive order.

Is there a way I can modify this cursor to get the records I want, or is there another process to get the records that have the 'Not Home' filter more than once, in consecutive order?

Thanks for any suggestions in advance....

DECLARE get_lastCallresult CURSOR FOR

select count(*), lastCallresult, fn, ln from contacts

where lastCallresult = 'Not Home'

and contactType = 'referral'

and system = 'lead'

--and callback ='Not Home'

group by lastCallresult,fn, ln

having COUNT(*) > 2

OPEN get_lastCallresult

FETCH NEXT FROM get_lastCallresult

WHILE @.@.FETCH_STATUS = 0

FETCH NEXT FROM get_lastCallresult

CLOSE get_lastCallresult

DEALLOCATE get_lastCallresult

Try adding identity() to your select

Code Snippet

select count(*), lastCallresult, fn, ln from contacts, IDENTITY(int, 1,1) as seq

where lastCallresult = 'Not Home'

and contactType = 'referral'

and system = 'lead'

--and callback ='Not Home'

group by lastCallresult,fn, ln

having COUNT(*) > 2

|||

For this kind of questions, it is better to post some DDL, including constraints and indexes, sample data and expected result.

AMB

|||

Are you sure that you require a CURSOR based operation?

I suggest that you consider dropping the CURSOR and just create a #temp TABLE with the query results. Then you can cycle through the temp table if your needs require rowwise operations.

However, if you provided the code that you use in the cursor, we might be able to help you revise it to do it all in one operation. SQL Server is best when used for SET-based operations, not 'recordset'-type operations.

|||

Try adding an ORDER BY clause to your statement. Cursor operations are ordered, but you have to specify the ORDER BY for the ordering to be guaranteed.

Without more info, I have no idea if you actually need a cursor, but look into the ROW_NUMBER() function if this is SQL Server 2005. It will let you do a lot of cool things in regard to getting the last row.

DaleJ, this isn't valid syntax, certainly not in the FROM clause, and not in the SELECT clause without an INTO:

select count(*), lastCallresult, fn, ln

from contacts, IDENTITY(int, 1,1) as seq

where lastCallresult = 'Not Home'

and contactType = 'referral'

and system = 'lead'

--and callback ='Not Home'

group by lastCallresult,fn, ln

having COUNT(*) > 2

|||

Oops, yup, thanks Louis.

I didn't think that all the way through along with not paying attention to where I pasted...

|||

The only reason I'm trying a cursor is that I was trying to scan through the rows and find all those rows that are repeated and happen one after the other,

i.e. (this is not the actual table; it is just something to demonstrate):

Name Address Phone DateCalled CalledReason

John 123 Main St 123-1234 01/02/06 Survey

John 123 Main St 123-1234 02/25/06 Survey

The column that has the called reason is the one that I want; it is repeated and the occurrences happen one after the other.

|||

druiz,

The solution is to find the existing groups of consecutive rows with more than one element. If you are using SQL Server 2005, then you can try using the new ranking function ROW_NUMBER.

Code Snippet

create table dbo.t1 (

[Name] varchar(50),

[Address] varchar(50),

Phone char(8),

DateCalled datetime,

CalledReason varchar(25)

)

go

insert into dbo.t1 values('John', '123 Main St', '123-1234', '01/01/06', 'Reason 1')

insert into dbo.t1 values('John', '123 Main St', '123-1234', '01/02/06', 'Survey')

insert into dbo.t1 values('John', '123 Main St', '123-1234', '02/25/06', 'Survey')

insert into dbo.t1 values('John', '123 Main St', '123-1234', '02/26/06', 'Reason 2')

insert into dbo.t1 values('John', '123 Main St', '123-1234', '02/27/06', 'Reason 3')

insert into dbo.t1 values('John', '123 Main St', '123-1234', '02/28/06', 'Reason 3')

insert into dbo.t1 values('Paul', '124 Main St', '123-5678', '01/02/06', 'Reason 1')

insert into dbo.t1 values('Paul', '124 Main St', '123-5678', '01/03/06', 'Reason 2')

go

;with cte

as

(

select

[name], [address], phone, datecalled, calledreason,

row_number() over(partition by [name], [address], phone order by datecalled) -

row_number() over(partition by [name], [address], phone, calledreason order by datecalled) as grp

from

dbo.t1

)

select

[name],

[address],

phone,

calledreason,

min(datecalled) as min_datecalled,

max(datecalled) as max_datecalled

from

cte

group by

[name], [address], phone, calledreason, grp

having

count(*) > 1

go

drop table dbo.t1

go

AMB

|||

This snippet has been really helpful. I tried it the same way and it gave the results I'm looking for; I'm just going to make a couple of mods to fit my tables.

Thanks