Friday, March 30, 2012

Loss of Server

I'm not sure if this is a Windows issue or a SQL Server in a cluster issue,
but it's strange.
We have a Windows 2003 Server Enterprise w/SP2 Running in a cluster. The
only thing on these servers is SQL Server. It's behind a firewall with only
the SQL ports open.
The problem we are running into is that from time to time, the primary node
in the SQL Cluster becomes unresponsive to the public NIC and the heartbeat
NIC and it doesn't failover. You can't RDP to it and the Cluster
administrator doesn't pick it up. You can't even ping the primary or
heartbeat from the passive node. It's like it is just not there.
There is a monitor NIC on this server as well, and they are throwing NO
alarms.
After about 20 minutes, it comes back.
Should this go to the Cluster group? Any suggestions?
I would suggest taking it to the Windows Cluster group. I believe SQL Server runs
on top of the Cluster service, so that is the right place to start.
"Kevin A" <kevina@.cqlcorp.com> wrote in message
news:Oo9HQ5FiIHA.4468@.TK2MSFTNGP03.phx.gbl...
> I'm not sure if this is a Windows issue or a SQL Server in a cluster
> issue, but it's strange.
> We have a Windows 2003 Server Enterprise w/SP2 Running in a cluster. The
> only thing on these servers is SQL Server. It's behind a firewall with
> only the SQL ports open.
> The problem we are running into is that from time to time, the primary
> node in the SQL Cluster become unresponsive to the public NIC and the
> heartbeat NIC and it doesn't failover. You can't RDP to it and the
> Cluster administrator doesn't pick it up. You can't even ping the primary
> or heartbeat from the passive node. It's like it is just not there.
> There is a monitor NIC on this server as well, and they are throwing NO
> alarms.
> After about 20 minutes, it comes back.
> Should this go to the Cluster group? Any suggestions?
>

Loss of server

Dear all
Having read BOL, I was of the understanding that if a machine was lost
(anything but disk failure) then it was very difficult to recover the data.
The reason being that the data- and log-files were still "attached" to the
dead SQLServer and needed to be detached from it before they could be used
again; a difficult operation if the machine is dead.
However, someone suggested that this was not the case. If a machine dies
then it is a simple operation to physically disconnect the disks from the
dead machine and connect them to a new machine and continue working. This
assumes the Standard Edition of SQLServer (i.e. no clustering involved).
I can test this, but it will take a few days to set the equipment up, so I
wondered in the meantime whether anyone could confirm whether this was the
case. If so, then presumably a SAN would present an even simpler solution,
particularly if the disk set is a RAID5+1 configuration?
Thanks in advance
Griff
Griff,
The SQL Server documentation says that you can attach a database if you first detached it.
You *might* be able to attach it even if you didn't detach it first, but consider this as one of
those "lucky" situations. It is not guaranteed or documented.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Griff" <Howling@.The.Moon> wrote in message news:e7lkMBAkEHA.3148@.TK2MSFTNGP10.phx.gbl...
> Dear all
> Having read BOL, I was of the understanding that if a machine was lost
> (anything but disk failure) then it was very difficult to recover the data.
> The reason being that the data- and log-files were still "attached" to the
> dead SQLServer and needed to be detached from it before they could be used
> again; a difficult operation if the machine is dead.
> However, someone suggested that this was not the case. If a machine dies
> then it is a simple operation to physically disconnect the disks from the
> dead machine and connect them to a new machine and continue working. This
> assumes the Standard Edition of SQLServer (i.e. no clustering involved).
> I can test this, but it will take a few days to set the equipment up, so I
> wondered in the mean time whether anyone could confirm whether this was the
> case. If so, then presumably a SAN would present an even simplier solution,
> particularly if the disk set is a RAID5+1 configuration?
> Thanks in advance
> Griff
>|||Hi,
What is a server failure?
Which part(s) need to fail to give a server failure? CPU? Memory?
Motherboard? Disc Controller? Boot Disc? Master Database? Data drives? Log
Drives? PSU? etc?
You are highlighting the importance of DP (I prefer DP to DR - Disaster
Prevention is better than Cure). So, what can fail, what can you do to
prevent it? What do you do if it happens? Have you rehearsed for it? Does
the process work?
So a PSU blows up and takes the motherboard and CPU(s) with it. The
system/boot disc drive goes at the same time. Sounds like a server failure
to me. What do you do? Have DP? Then you may already have a standby server,
backup copies of databases on other computers, be using log shipping, and
have only to switch to standby... It is always better to be prepared before
the event than to rely on a toolkit to fish you out of some scenario after
an unpredictable event.
Recovering SQL Server databases in scenarios such as this is perhaps the
poorest documented part of SQL Server. What happens if the log drive dies at
run time? Or the data drive? Or the RAID controller? (That happened to me a
few weeks ago and was not pleasant, we did have DP in place however). We all
know the theory, but the answer is that if you wish to get things back up and
running with the least data loss, as the system is supposed to be designed, you
seem to have no choice but to ring MS, 'cos if you ask here that is what they
will tell you to do.
So rule #1 for SQL Server DP: Don't lose the data.
Comments / constructive criticism welcome.
- Tim
"Griff" <Howling@.The.Moon> wrote in message
news:e7lkMBAkEHA.3148@.TK2MSFTNGP10.phx.gbl...
> Dear all
> Having read BOL, I was of the understanding that if a machine was lost
> (anything but disk failure) then it was very difficult to recover the
> data.
> The reason being that the data- and log-files were still "attached" to the
> dead SQLServer and needed to be detached from it before they could be used
> again; a difficult operation if the machine is dead.
> However, someone suggested that this was not the case. If a machine dies
> then it is a simple operation to physically disconnect the disks from the
> dead machine and connect them to a new machine and continue working. This
> assumes the Standard Edition of SQLServer (i.e. no clustering involved).
> I can test this, but it will take a few days to set the equipment up, so I
> wondered in the mean time whether anyone could confirm whether this was
> the
> case. If so, then presumably a SAN would present an even simplier
> solution,
> particularly if the disk set is a RAID5+1 configuration?
> Thanks in advance
> Griff
>|||Hi Tim
I agree with you completely. We use a server with RAID5+1 disks, and
implement log shipping onto a stand-by server. However, our consultant
pointed out that this provides us with a way of getting the service up
really quickly, but with loss of data (back to the last log that was
shipped). He suggested that the way to lose no data (providing that the
disks were not damaged) was simply to disconnect the scsi cables from the
down server and connect them to the standby server; that way no data was
lost (service might take longer to resume, but down time in our business is
perceived as better than loss of data). I just wished to question whether
this really was technically possible/reliable.
Griff|||Griff,
See my earlier reply. I suggest you ask the consultant where his strategy is documented. That should
end the discussion.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Griff" <Howling@.The.Moon> wrote in message news:OT$GeZBkEHA.1644@.tk2msftngp13.phx.gbl...
> Hi Tim
> I agree with you completely. We use a server with RAID5+1 disks, and
> implement log shipping onto a stand-by server. However, our consultant
> pointed out that this provides us with a way of getting the service up
> really quickly, but with loss of data (back to the last log that was
> shipped). He suggested that the way to lose no data (providing that the
> disks were not damaged) was to simply to disconnect the scsi cable to the
> down server and connect them to the standby server and that way no data was
> lost (service might take longer to resume, but down time in our business is
> perceived as better than loss of data). I just wished to question whether
> this really was technically possible/reliable.
> Griff
>

Loss of records

Hi, I have the following problem.
In a flat file I have 250,000 rows, but when they are loaded into the DB only 249,995 arrive; 5 rows got lost.

I cannot find where the error occurs; the logging does not report any error,
and I cannot tell what the reason for the problem might be.
Does anyone know why this could be happening?

Help, please.

If you execute the package in BIDS you can see how many rows are output from each component. This should make it very easy to see where the rows are being lost.

-Jamie

Loss of inserted records during/after an insert

We have a system that records a data record for each cycle of a machine in an MS SQL Server database. These cycles take place approximately once every 10-12 seconds, and there are four stations on the machine, so we are writing approx. 24 records per minute. Our database contains four tables, one for each machine station. Each record contains a unique sequential number generated by the machine control software. Data is logged using SQL INSERT scripts in the application (Wonderware) that Operators use to control the machine. (Wonderware script, BTW, is not VBA, but a proprietary scripting language.)
Everything works fine, UNTIL one of the stations encounters an operational fault, and stops. This brings up a window on the control screen that requires the Operator to manually enter data, and an UPDATE statement is executed to modify the last record generated. Occasionally when this update is processed, a single record will be lost (never written) in one or more of the data tables.
At first we had all of the records going to one table. Thinking maybe the update for one station was somehow locking an index in the table, we separated the tables so that each station has its own table. Since the station is stopped, no new record is generated for that station until after the update is processed. The other stations can still be running, so they are generating INSERT commands, which could coincide with the UPDATE command. Both commands use the same connection, which is always open.
We still occasionally lose ONE record in one or more of the other tables when the UPDATE executes.
Any thoughts?
Message posted via http://www.sqlmonster.com
Use the profiler and watch the sql statements - the most likely culprit is a
logic error within the application. Based on your narrative, I would guess
that the problem lies in the error-handling logic.
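To illustrate the kind of error handling being suggested, here is a minimal SQL 2000-era sketch; the table and column names are hypothetical, not the poster's actual schema:

-- Check @@ERROR immediately after the INSERT instead of assuming success
DECLARE @CycleNumber int
SET @CycleNumber = 1001

INSERT INTO dbo.Station1Log (CycleNumber, CycleData)
VALUES (@CycleNumber, 'cycle payload')

IF @@ERROR <> 0
    -- Surface the failure rather than silently dropping the record
    RAISERROR ('Insert of cycle %d failed.', 16, 1, @CycleNumber)

If the application fires INSERT and UPDATE statements on one shared connection without checking the outcome of each statement, a failed INSERT looks exactly like a "lost" record.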
"Lee Drendall via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:9981fa1e356140a298c4ffa13b629920@.SQLMonster.c om...
> We have a system that records a data record for each cycle of a machine in
an MS SQL Server database. These cycles take place approximately once every
10-12 seconds, and there are four stations on the machine, so we are writing
approx. 24 records per minute. Our database contains four tables, one for
each machine station. Each record contains a unique sequential number
generated by the machine control software. Data is logged using SQL INSERT
scripts in the application (Wonderware) that Operators use to control the
machine. (Wonderware script, BTW is not VBA, but is a proprietary scripting
language.)
> Everything works fine, UNTIL the one of the stations encounters an
operational fault, and stops. This brings up a window on the control screen
that requires the Operator to manually enter data, and an UPDATE statement
is executed to modify the last record generated. Occasionally when this
update is processed, a single record will be lost (never written) in one or
more of the data tables.
> At first we had all of the records going to one table. Thinking maybe the
update for one station was somehow locking an index in the table, we
separated the tables so that each station has its own table. Since the
station is stopped, no new record is generated for that station until after
the update is processed. The other stations can still be running, so they
are generating INSERT commands, which could coincide with the UPDATE
command. Both commands use the same connection, which is always open.
> We still occasionally lose ONE record in one or more of the other tables
when the UPDATE executes.
> Any thoughts?
> --
> Message posted via http://www.sqlmonster.com

Loss of Decimals Upon Link to Access

Hi. I have an Access DB that's linked to a SQL DB view. The SQL view is based
on a table which has some data types as float. I created a view on the table.
The view shows me units of a product divided by units of all products. The
results are expressed in the view as decimals. So, for example, .499857. This
is what I want. However, when I link the view to Access, all of my decimals
become zero. For example, .499857 becomes 0. I'm completely confounded. Any
suggestions would be fantastic! Thanks!
If the SQL view is correct and you can use Query Analyzer to
view the results and they are as expected, then this is more
of an MS Access issue. Make sure you have the latest Jet
service pack installed on the client.
But this is more of an Access issue so you would want to try
posting in one of the Access newsgroups. When posting your
question, be sure to include versions (version of SQL
Server, version of Access), what service packs you are
using.
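One workaround sometimes used for float columns that misbehave over a linked view (an assumption here, not verified against this particular setup) is to cast the expression to an explicit decimal inside the view, so the linked table exposes an exact numeric type to Jet; the object names below are hypothetical:

CREATE VIEW dbo.vProductShare
AS
SELECT ProductID,
    CAST(Units * 1.0 / TotalUnits AS decimal(10, 6)) AS UnitShare
FROM dbo.ProductTotals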
-Sue
On Wed, 19 Apr 2006 08:11:02 -0700, Mike C
<MikeC@.discussions.microsoft.com> wrote:

>Hi. I have an Access DB that's linked to a SQL DB view. The SQL view is based
>on a table which has some data types as float. I created a view on the table.
>The view shows me units of a product divided by units of all products. The
>results are expressed in the view as decimals. So, for example, .499857. This
>is what I want. However, when I link the view to Access, all of my decimals
>become zero. For example, .499857 becomes 0. I'm completely confounded. Any
>suggestions would be fantastic! Thanks!

Loss of data due to conflict

hi all,
I installed merge replication successfully. After that I tried to add a row from the publisher with id 2004 (it is the primary key, an autogenerated column) and different other columns; likewise I inserted a row from the subscriber with id 2004 and different other columns. When I checked after the merge agent ran successfully, only one row had replicated; the other row failed to replicate due to a conflict. This is causing loss of data. Please advise what I have to do to get the data from both sides.
thanks & regards,
reddy
Reddy,
with merge, if you have identity columns as the PK, you need to partition
according to publisher and subscriber ie each uses its own range. Before
initialization, the publisher PK is set to be "Identity Yes (Not for
Replication)" and SQL Server will manage the seeds on publisher and
subscriber and you can define when a new seed is allocated. In your case
this doesn't seem to be happening, presumably because it is a manual setup?
If this is so, you'll need to partition the identity values yourself. Here
is an article which should help you:
http://www.mssqlserver.com/replicati...h_identity.asp
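As a rough sketch of what partitioning the identity values looks like (the table name and range boundaries are hypothetical; pick ranges large enough for your volumes):

-- On the publisher: start the range low
DBCC CHECKIDENT ('dbo.Orders', RESEED, 1)

-- On the subscriber: start far enough away that the ranges never overlap
DBCC CHECKIDENT ('dbo.Orders', RESEED, 1000000)

With the column marked NOT FOR REPLICATION, rows arriving via the merge agent do not advance the local seed, so each side only generates values from its own range.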
HTH,
Paul Ibison
|||paul,
thank you very much for your information.
But if I set different ranges on both publisher and subscriber, the sequence will be broken. Is there any other way you would like to suggest?
thanks & regards
chandra
|||In merge, there is no other way to partition on one single PK-identity column and avoid identity conflicts, as this would mean the subscriber needs to be in contact at all times with the publisher (zero autonomy). This is possible in transactional with immediate updating subscribers, as the publisher itself controls all identity values, even those on the subscriber.
As an alternative, you could make your PK 2 columns with one of them as the Site Identifier, while the other is an identity column. In this way duplicate identity values could be added and this wouldn't result in a conflict.
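A minimal sketch of that two-column key, with hypothetical names; because each site inserts its own SiteID, the same identity value generated at two sites no longer collides:

CREATE TABLE dbo.Orders (
    SiteID    int NOT NULL,  -- identifies the publisher or a given subscriber
    OrderID   int IDENTITY(1, 1) NOT FOR REPLICATION NOT NULL,
    OrderDate datetime NOT NULL DEFAULT (GETDATE()),
    CONSTRAINT PK_Orders PRIMARY KEY (SiteID, OrderID)
)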
HTH,
Paul Ibison
|||paul,
thank you very much for your information. I'll go for the second option, that is, a 2-column PK with one of them as the site identifier.
I think it will work fine for my requirement.
thanks & regards
reddy

Loss of connection to linked servers -- Please help

All,
SQL 2000, sp3, Server 2000 sp4
I have 3 servers, and they are all set up as linked servers. The links to the
other servers work, then all of a sudden you can't see the other
servers. Has anyone ever seen this?
I have even deleted the info in the client network utility and tried to
re-register the servers, to no avail.
Please help.
Thanks All,
snyper
*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!
Hi,
this is mostly linked to a physical network factor;
check that part out.
thanks
rahul
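If the linked server definitions themselves are suspect, they can be dropped and re-created from script; the server name below is hypothetical:

-- Drop and re-create the linked server definition
EXEC sp_dropserver @server = N'REMOTESRV', @droplogins = 'droplogins'
EXEC sp_addlinkedserver @server = N'REMOTESRV', @srvproduct = N'SQL Server'

-- Quick connectivity test against the remote instance
SELECT name FROM REMOTESRV.master.dbo.sysdatabases

That said, all three servers dropping out of sight at once does point at the network layer, as suggested above.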
>--Original Message--
>All,
>SQL 2000, sp3, Server 2000 sp4
>I have 3 servers, they are all set up as linked servers. The link to the
>other servers works then all of a sudden you can't see the other
>servers. Has anyone ever seen this?
>I have even deleted the info in the client network utility and tried to
>re-register the servers, to no avail.
>Please help.
>Thanks All,
>snyper
>*** Sent via Developersdex http://www.developersdex.com ***
>Don't just participate in USENET...get rewarded for it!
>.
>

Loss of Connection

We have an application running on a server that does a connection
check on its connection to the database on a SQL Server 2000.
Sometimes it loses its connection and then is unable to re-establish
the connection for over an hour. During the time that it loses
contact with the SQL Server there is some pretty heavy activity on the
SQL Server 2000 box.
Is there some setting I've overlooked ... or is this some weakness on
the part of SQL Server ? I don't think the application is doing a
query or anything, I think it's just some heartbeat kind of routine.
rls
Seattle, WA
Are you attaching via name or IP address? Without a WINS, DNS or ADS server,
attaching via name may be unreliable. Try using the IP address.
--
J
www.urbanvoyeur.com
"brlarue" <ron.strouss@.westfarm.com> wrote in message
news:42b547894434e770528406949d17c5b5@.news.teranews.com...
> We have an application running on a server that does a connection
> check on it's connection to the database on a SQL Server 2000.
> Sometimes it looses it's connection and then is unable to restablish
> the connection for over an hour. During the time that it looses
> contact with the SQL Server there is some pretty heavy activity on the
> SQL Server 2000 box.
> Is there some setting I've overlooked ... or is this some weakness on
> the part of SQL Server ? I don't think the application is doing a
> query or anything, I think it's just some heartbeat kind of routine.
> rls
> Seattle, WA|||We have a DNS. I'll take a look at the possibility of using the IP
address. Here is the message coming from the application that loses
its connection.
GENTRAN Notification: ConvertedNotification3 Oct 05 2003 07:29:20
EventID=55867 1-1-50009:ODBC: MFC database exception in
Program/RETCODE: Edimgr/-1State:08S01,Native:0,Origin:[Microsoft][ODBC
SQL Server Driver]
Communication link failure
-
On Mon, 6 Oct 2003 06:26:59 -0400, "UrbanVoyeur" <nospam@.nospam.com>
wrote:
>are you attaching via name or IP address? Without a WINS, DNS or ADS server,
>attaching via name may be unrealizable. Try using the IP address.

Loss of Column properties when exporting with DTS

Exporting a SQL 2000 database from one SQL 2000 server to another. I am using the DTS Import/Export Wizard. The data transfers fine, but column properties such as identity = yes or default value = (getdate()) are lost in the transfer. What am I missing??
You may want to check this DB Journal (http://www.databasejournal.com/features/mssql/article.php/1499481) article.
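For reference, the wizard's plain table-copy path creates the destination tables without constraints or identity; choosing "Copy objects and data between SQL Server databases" preserves them, or the properties can be re-applied by hand. A sketch with hypothetical names:

-- A lost default can be re-added after the transfer
ALTER TABLE dbo.Orders
    ADD CONSTRAINT DF_Orders_OrderDate DEFAULT (GETDATE()) FOR OrderDate

-- IDENTITY cannot be bolted onto an existing column; the table has to be
-- created (or rebuilt) with it, e.g.:
CREATE TABLE dbo.OrdersNew (
    OrderID   int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    OrderDate datetime NOT NULL
)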

Losing umlauts in SQL Templates

Hi there,

does anybody know why I'm losing all umlauts whenever I drag a custom template into a script? Double-clicking works fine, but I definitely want to avoid an unproductive workaround like double click > mark all > copy > paste into another script.

We work a lot with shared templates and we cannot avoid using umlauts in the scripts.

Any expericences or suggestions?

Thanks in advance

Thomas

Sounds like your local windows regional settings might not be right for you. Try changing the input language to German and see if that helps.

|||We once had that while using the wrong encoding for our scripts. We finally decided to use the Unicode format, which solved the problem.

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

|||I already use German as input language. Sorry, but this does not seem to help.|||Thanks for your answer. Can you give me a hint, where I could change the encoding?|||

When saving a file in SQL Server Management Studio or VS, use Save As..., then use the small arrow / combobox in the file dialog within "Save As" to specify the encoding.

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

|||

Excellent, it works! Thanks a lot, Jens!

Losing temporal tables

Hi, I want to know if there's any particular reason why a temporary
table can be dropped before the session is closed.
I'm having the following problem: I create the temp table when a form
of my application is created, work with it, and then drop it on the
form's close event.
It works fine most of the time, but from time to time the table seems
to be dropped before I close the form, because I get a "#MyTable
doesn't exist" error message.
Any ideas?
Working against SQL2K, W2K Server, from a W2K Pro machine.
Thanx
If your connection is getting dropped at any time you will lose the temp
table. Temp tables are really designed for a brief life span. If you need
to hold certain information for long periods of time like that you may want
to consider using real tables. Or better yet maybe a RS on the client.
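To make the lifetime rule concrete: a local temp table is scoped to the connection (session) that created it, as in this minimal sketch:

-- Visible only to the connection that runs this
CREATE TABLE #MyTable (ID int NOT NULL, Val varchar(50) NULL)
INSERT INTO #MyTable (ID, Val) VALUES (1, 'works here')
SELECT * FROM #MyTable  -- fine on this connection

-- Any other connection, or this one after a disconnect/reconnect
-- (e.g. a recycled pooled connection), gets:
-- Invalid object name '#MyTable'.

So if the data access layer silently drops and reopens its connection between the form's open and close events, the table is gone even though the code never dropped it.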
Andrew J. Kelly SQL MVP
"Guillermo Casta?o A" <guillermoc74@.hotmail.com> wrote in message
news:9350d78d.0409170946.33db2291@.posting.google.c om...
> Hi, i want to know if there's any particular reason why a temporal
> table can be dropped before closing the session.
> I'm having the following problem: i create the temp table when a form
> of my application is created, work with it and then drop it on the
> form's close event.
> It works fine most of the time, but from time to time the table seems
> to be dropped before i close the form because i'm having a: #MyTable
> doesn't exists error message.
> Any ideas?
> Working against SQL2K, W2K Server, from a W2K Pro machine.
> Thanx

Losing Temp Tables Between Tasks

I have a control flow setup with 5 tasks.

1. Create Temp Table (Execute SQL)
2. Load Temp Table (Execute SQL)
3. Data Flow Task using the temp table
4. Data Flow Task using the temp table
5. Drop Temp Table (Execute SQL)

Tasks 3 & 4 are currently setup to run parallel/concurrently - meaning they both branch directly off of task #2. They both also lead to task #5. Also, my connection manager is set to retain its connection.

Here's the problem. When executing this flow, task #3 will sometimes fail, other times it will succeed - without me changing anything between runs (I simply end debugging and run it again). When it does fail it is always because it cannot find the temp table. One thing to note is that task #4 (running at the same time as #3 and using the same temp table) always finishes first because it does quite a bit less work. Last thing to note: if I set up task 4 to run after 3 it works everytime, but of course it takes much longer for the total package to execute.

How is it that the task can find the temp table on some runs, but not on others? If this is just quirky behavior that I am going to have to live with for the time being, is there a way to force a task to re-run X number of times before giving up and moving on?

Thanks,
David Martin

Temp tables are only guaranteed to remain in existence as long as they are referenced. Is it possible that the connection is being dropped and reopened by SSIS for some reason between tasks?

Or that there is some time during the processing where Sql Server cannot detect that the temp table is still being referenced, so drops it even if the connection is still open?

Seems like using temp tables is risky if you don't have full and explicit control over their existence. I want to use them also but not if I cannot guarantee they will be there when I need them!

|||

Make sure the "RetainSameConnection" property is set to "True" for the connection. This will allow the Execute SQL tasks to share the connection so the temp table can be accessed.

Frank


losing some results

I have created a report that fits the layout to achieve the fields that I
require. I then created an aspx page where my users can select any number of
fields and values to use in the where clause of the sql statement. My aspx
page then builds a sql statement based on these selections and passes this
sql statement to the report as a parameter. The report calls a stored
procedure that executes the sql statement passed in. This works great for
all but one situation that I have found. When a user enters '%bel%' to use
in the where clause, for some reason by the time it gets to the reporting
services report the sql statement has been modified to 'l%', dropping the
'%be'. Is '%be' a reserved command?
example:
if my table had the following entries in a column named city: Boston,
Belville,New York,Detroit,Los Angeles, Lakeville
my user wants to find all cities that have 'bel' in the name
the resulting sql would be select city from table where city like '%bel%'
I set up my report to show the parameters when the aspx page redirects to
the report using the url of the report
the sql that shows up in the parameter field is select city from table where
city like 'l%'
Any help would be appreciated.
Thank you
Solved my own problem. What I had to do was replace all my '%' with '%25' to
encode my url before I issued a response.redirect.
"Mike" <mike.no.spam.please@.no.spam.com> wrote in message
news:u2LsSw6tEHA.1596@.TK2MSFTNGP10.phx.gbl...
>i have created a report that fits the layout to achieve the fields that i
>require, i then created an aspx page where my users can select any number
>of fields and values to use in the where clause of the sql statement. My
>aspx page then builds an sql statement based on these selections and passes
>this sql statement to the report as a parameter. The report calls a stored
>procedure that executes the sql statement passed in. This works great for
>all but one situation that i have found. When a user enters '%bel%' to use
>in the where clause for some reason when it gets to reporting services
>report the sql statement is modified to 'l%'. Dropping the '%be'. Is
>'%be' a reserved command.
> example:
> if my table had the following entries in a column name city Boston,
> Belville,New York,Detroit,Los Angeles, Lakeville
> my user wants to find all cities that have 'bel' in the name
> the resulting sql would be select city from table where city like '%bel%'
>
> i setup up my report to show the parameters when the aspx page redirects
> to the report using the url of the report
> the sql that shows up in the parameter field is select city from table
> where city like 'l%'
> Any help would be appreciated.
> Thank you
>

Losing some data

We had problems with our ERP application since the log file of the SQL server
was full. We only have 1 database in the server. We use Drive D:\ for the log
file. I moved one file to C:\ drive to free some space. This file has nothing
to do with the database. Then, we restarted the server. After the server was
up again, I noticed weird things such as:
1. log file (ldf) went down from 9 GB to 2 MB
2. we lost some data in some tables.
My question is:
1. What caused those weird things?
2. After the server was up, users have been entering/processing new data. Is
it possible to restore data from the last backup before the server had
problem without deleting the new data?
We don't have a DBA here so we don't have any clue on what to do.
Thanks for replying.
Hi
"lwidjaya" wrote:

> We had problems with our ERP application since the log file of the SQL server
> was full. We only have 1 database in the server. We use Drive D:\ for the log
> file. I moved one file to C:\ drive to free some space. This file has nothing
> to do with the database. Then, we restarted the server. After the server was
> up again, I noticed weird things such as:
> 1. log file (ldf) went down from 9 GB to 2 MB
> 2. we lost some data in some tables.
> My question is:
> 1. What caused those weird sthings?
> 2. After the server was up, users have been entering/processing new data. Is
> it possible to restore data from the last backup before the server had
> problem without deleting the new data?
> We don't have a DBA here so we don't have any clue on what to do.
> Thanks for replying.
When you restarted the server did you do anything else? You should not have
lost any committed data, but uncommitted data may be rolled back.
If you have entered new data it may be better to go back to the old backup
and re-enter it. If you have space then the last backup can be restored as a
database with a different name, and you could use a tool such as SQL Data
Compare or dbghost to see what has changed and then make your decision. You
could do this first on a different server if there is not enough room on the
live environment. These tools do have options to insert the differences from
one database to the other, but if you have triggers and other actions
performed you may need to disable them and be very careful how you add the
information.
The longer you let users add/change data then the harder it will be to go
back.
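A sketch of restoring the last backup under a different name on the same server (file names and paths are hypothetical):

RESTORE DATABASE MyERPDB_Restored
    FROM DISK = N'E:\SQLBackups\MyERPDB_full.bak'
    WITH MOVE 'MyERPDB_Data' TO N'D:\Data\MyERPDB_Restored.mdf',
         MOVE 'MyERPDB_Log'  TO N'D:\Logs\MyERPDB_Restored.ldf'

The logical file names for the MOVE clauses can be read from RESTORE FILELISTONLY against the backup file.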
HTH
John|||Hi John,
thanks for your reply.
I checked the database and since we just had this database since November
2006, we've never done any transaction log backup on it. Is doing back up
using veritas the same as doing backup from EM? We do file backup everynight
using veritas.
I'm planning to run transaction log backup from EM after the scheduled
backup from veritas, will it do any harm to the database? Should we set up
transaction log back up every hour? Is it true that transaction log back up
will shrink log file?
Thanks in advance,
Lisa
"John Bell" wrote:
> When you restarted the server did you do anything else? You should not have
> lost any committed data, but uncomitted data may be rolled back.
> If you have entered new data it may be better to go back to the old backup
> and re-enter it. If you have space then the last backup can be restored as a
> database with a different name, and you could use a tool such as SQL Data
> Compare or dbghost to see what has changed and then make your decission. You
> could do this first on a different server if there is not enough room on the
> live environment. These tools do have options to insert the differences from
> one database to the other, but if you have triggers and other actions
> performed you may need to disable them and be very careful how you add the
> information.
> The longer you let users add/change data then the harder it will be to go
> back.
> HTH
> John|||Hi Lisa
Veritas is probably using an agent to back up the database; it will
effectively be doing a full backup. You could set up your own schedule to do
transaction log backups to disc. My preference is to backup to disc and then
to tape, that way you can always have the most recent backups at hand if you
wish to quickly restore the database. There is a period between doing the
disc backup and putting it onto tape, but if you use a raid disc array this
is reduced.
Transaction log backups themselves will not shrink the file. It does enable
the transaction log file to be re-used so it will limit the size of the file
under normal workloads, and the transaction log should only grow if you have
an abnormally large number of changes. Continually shrinking the data and log
files is not a good idea as it can lead to disc fragmentation of the file
which will impact on performance. Your database should be in full recovery
mode.
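For reference, the backup pattern described here is plain T-SQL that can be scheduled from SQL Server Agent; the paths and database name are hypothetical:

-- Nightly full backup to disc
BACKUP DATABASE MyERPDB
    TO DISK = N'E:\SQLBackups\MyERPDB_full.bak'
    WITH INIT

-- Hourly log backup; each one allows the log space to be reused
BACKUP LOG MyERPDB
    TO DISK = N'E:\SQLBackups\MyERPDB_log.trn'

This assumes the database is in full recovery mode, as noted above.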
John
"lwidjaya" wrote:
> Hi John,
> thanks for your reply.
> I checked the database and since we just had this database since November
> 2006, we've never done any transaction log backup on it. Is doing back up
> using veritas the same as doing backup from EM? We do file backup everynight
> using veritas.
> I'm planning to run transaction log backup from EM after the scheduled
> backup from veritas, will it do any harm to the database? Should we set up
> transaction log back up every hour? Is it true that transaction log back up
> will shrink log file?
> Thanks in advance,
> Lisa
> "John Bell" wrote:|||Hi John,
We're running transaction log backup every hour now. As you said, the
transaction log backup didn't shrink the ldf file. Is it ok if we shrink the
database once a while?
Thanks,
Lisa
"John Bell" wrote:
> Hi Lisa
> Veritas is probably using an agent to backup the database, it will
> effectively doing a full backup. You could set up your own schedule to do
> transaction log backups to disc. My preference is to backup to disc and then
> to tape, that way you can always have the most recent backups at hand if you
> wish to quickly restore the database. There is a period between doing the
> disc backup and putting it onto tape, but if you use a raid disc array this
> is reduced.
> Transaction log backups themselves will not shrink the file. It does enable
> the transaction log file to be re-used so it will limit the size of the file
> under normal workloads, and the transaction log should only grow if you have
> an abnormally large number of changes. Continually shrinking the data and log
> files is not a good idea as it can lead to disc fragmentation of the file
> which will impact on performance. Your database should be in full recovery
> mode.
> John
> "lwidjaya" wrote:
>|||Hi Lisa
It is not a good idea to shrink any of the database files as this can lead
to disc fragmentation and potential degradation of performance. See
http://www.karaszi.com/SQLServer/info_dont_shrink.asp, the only exception
may be if you have done something very abnormal, such as mass data
migration/upgrade.
Make sure that if you have auto expansion on, the value is not
a percentage. Alternatively you could turn expansion off and expand
manually when it is a quiet period.
John
"lwidjaya" <lwidjaya@.discussions.microsoft.com> wrote in message
news:3DAF644A-9416-4F05-9986-8A1D929FA0DD@.microsoft.com...
> Hi John,
> We're running transaction log backup every hour now. As you said, the
> transaction log backup didn't shrink the ldf file. Is it ok if we shrink
> the
> database once a while?
> Thanks,
> Lisa
> "John Bell" wrote:
>|||> We're running transaction log backup every hour now. As you said, the
> transaction log backup didn't shrink the ldf file. Is it ok if we shrink
> the
> database once a while?
Why? Do you once in a while lease that disk space to some other process?
http://www.karaszi.com/SQLServer/info_dont_shrink.asp|||No, we don't lease the space to other process. We had ldf file less than 1 GB
before, then our consultant did 'tables reorganizing' and the ldf suddenly
grew to 13.5 GB. We only have 2 GB free space now for the log file. So, I'm
wondering if we can shrink it to get more space, just in case we need the
space in the future.
I have a question, in database property/Taskpad, it shows that the
transaction log space is 13.5 GB, used: 76 MB, and free: 13.4 GB. But how
come the ldf file size shows 13.5 GB? Does it mean the actual size of the ldf
file is only 76 MB if we shrink it?
Thanks for the replies!
"Aaron Bertrand [SQL Server MVP]" wrote:

> Why? Do you once in a while lease that disk space to some other process?
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
>
>|||> No, we don't lease the space to other process. We had ldf file less than 1
> GB
> before, then our consultant did 'tables reorganizing' and the ldf suddenly
> grew to 13.5 GB.
Well, that's a special case, and it fits under John Bell's comment
(something very abnormal). I assume your consultant does not reorganize
tables daily?

> I have a question, in database property/Taskpad, it shows that the
> transaction log space is 13.5 GB, used: 76 MB, and free: 13.4 GB. But how
> come the ldf file size shows 13.5 GB? Does it mean the actual size of the
> ldf
> file is only 76 MB if we shrink it?
No, that is not necessarily true. There are multiple virtual log files, and
how the shrink will physically change the physical log file depends on where
any active transactions are stored in the log file.
A|||No, our consultant only did it one time because we made changes to our ERP
data.
So, I guess we can do the database shrink since it's an 'abnormal case'?
Thanks.
"Aaron Bertrand [SQL Server MVP]" wrote:

> Well, that's a special case, and it fits under John Bell's comment
> (something very abnormal). I assume your consultant does not reorganize
> tables daily?
>
> No, that is not necessarily true. There are multiple virtual log files, and
> how the shrink will physically change the physical log file depends on where
> any active transactions are stored in the log file.
> A
>
>

Losing some data

We had problems with our ERP application since the log file of the SQL server
was full. We only have 1 database in the server. We use Drive D:\ for the log
file. I moved one file to C:\ drive to free some space. This file has nothing
to do with the database. Then, we restarted the server. After the server was
up again, I noticed weird things such as:
1. log file (ldf) went down from 9 GB to 2 MB
2. we lost some data in some tables.
My question is:
1. What caused those weird sthings?
2. After the server was up, users have been entering/processing new data. Is
it possible to restore data from the last backup before the server had
problem without deleting the new data?
We don't have a DBA here so we don't have any clue on what to do.
Thanks for replying.
Hi
"lwidjaya" wrote:

> We had problems with our ERP application since the log file of the SQL server
> was full. We only have 1 database in the server. We use Drive D:\ for the log
> file. I moved one file to C:\ drive to free some space. This file has nothing
> to do with the database. Then, we restarted the server. After the server was
> up again, I noticed weird things such as:
> 1. log file (ldf) went down from 9 GB to 2 MB
> 2. we lost some data in some tables.
> My question is:
> 1. What caused those weird sthings?
> 2. After the server was up, users have been entering/processing new data. Is
> it possible to restore data from the last backup before the server had
> problem without deleting the new data?
> We don't have a DBA here so we don't have any clue on what to do.
> Thanks for replying.
When you restarted the server did you do anything else? You should not have
lost any committed data, but uncomitted data may be rolled back.
If you have entered new data it may be better to go back to the old backup
and re-enter it. If you have space then the last backup can be restored as a
database with a different name, and you could use a tool such as SQL Data
Compare or dbghost to see what has changed and then make your decission. You
could do this first on a different server if there is not enough room on the
live environment. These tools do have options to insert the differences from
one database to the other, but if you have triggers and other actions
performed you may need to disable them and be very careful how you add the
information.
The longer you let users add/change data then the harder it will be to go
back.
HTH
John
|||Hi John,
thanks for your reply.
I checked the database and since we just had this database since November
2006, we've never done any transaction log backup on it. Is doing back up
using veritas the same as doing backup from EM? We do file backup everynight
using veritas.
I'm planning to run transaction log backup from EM after the scheduled
backup from veritas, will it do any harm to the database? Should we set up
transaction log back up every hour? Is it true that transaction log back up
will shrink log file?
Thanks in advance,
Lisa
"John Bell" wrote:
> When you restarted the server did you do anything else? You should not have
> lost any committed data, but uncomitted data may be rolled back.
> If you have entered new data it may be better to go back to the old backup
> and re-enter it. If you have space then the last backup can be restored as a
> database with a different name, and you could use a tool such as SQL Data
> Compare or dbghost to see what has changed and then make your decission. You
> could do this first on a different server if there is not enough room on the
> live environment. These tools do have options to insert the differences from
> one database to the other, but if you have triggers and other actions
> performed you may need to disable them and be very careful how you add the
> information.
> The longer you let users add/change data then the harder it will be to go
> back.
> HTH
> John
|||Hi Lisa
Veritas is probably using an agent to backup the database; it will
effectively be doing a full backup. You could set up your own schedule to do
transaction log backups to disc. My preference is to backup to disc and then
to tape; that way you always have the most recent backups at hand if you
wish to quickly restore the database. There is a period between doing the
disc backup and putting it onto tape, but if you use a RAID disc array this
is reduced.
Transaction log backups themselves will not shrink the file. They do enable
the transaction log file to be re-used, so they will limit the size of the
file under normal workloads, and the transaction log should only grow if you
have an abnormally large number of changes. Continually shrinking the data
and log files is not a good idea as it can lead to disc fragmentation of the
file, which will impact performance. Your database should be in full recovery
mode.
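A minimal sketch of such an hourly job step (the database name, path and
options are placeholders; here each backup is appended to the same file):
BACKUP LOG YourDatabase
TO DISK = 'D:\SQLBackups\YourDatabase_log.bak'
WITH NOINIT  -- NOINIT appends; use a new file name per backup if you prefer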
John
"lwidjaya" wrote:
> Hi John,
> thanks for your reply.
> I checked the database, and since we've only had it since November
> 2006, we've never done any transaction log backup on it. Is doing a backup
> using Veritas the same as doing a backup from EM? We do a file backup every
> night using Veritas.
> I'm planning to run a transaction log backup from EM after the scheduled
> backup from Veritas; will it do any harm to the database? Should we set up
> a transaction log backup every hour? Is it true that a transaction log
> backup will shrink the log file?
> Thanks in advance,
> Lisa
> "John Bell" wrote:
|||Hi John,
We're running a transaction log backup every hour now. As you said, the
transaction log backup didn't shrink the ldf file. Is it ok if we shrink the
database once in a while?
Thanks,
Lisa
"John Bell" wrote:
> Hi Lisa
> Veritas is probably using an agent to backup the database; it will
> effectively be doing a full backup. You could set up your own schedule to do
> transaction log backups to disc. My preference is to backup to disc and then
> to tape; that way you always have the most recent backups at hand if you
> wish to quickly restore the database. There is a period between doing the
> disc backup and putting it onto tape, but if you use a RAID disc array this
> is reduced.
> Transaction log backups themselves will not shrink the file. They do enable
> the transaction log file to be re-used, so they will limit the size of the
> file under normal workloads, and the transaction log should only grow if you
> have an abnormally large number of changes. Continually shrinking the data
> and log files is not a good idea as it can lead to disc fragmentation of the
> file, which will impact performance. Your database should be in full
> recovery mode.
> John
> "lwidjaya" wrote:
|||Hi Lisa
It is not a good idea to shrink any of the database files, as this can lead
to disc fragmentation and potential degradation of performance. See
http://www.karaszi.com/SQLServer/info_dont_shrink.asp; the only exception
may be if you have done something very abnormal, such as a mass data
migration/upgrade.
If you have auto-grow on, make sure the growth increment is not a percentage.
Alternatively you could turn auto-grow off and expand the files manually
during a quiet period.
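By expanding manually I mean something like this, run in a quiet period (the
database name, logical file name and sizes are placeholders; sp_helpfile
lists the real logical names):
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Log, SIZE = 4096MB)
-- and a fixed growth increment rather than a percentage:
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Log, FILEGROWTH = 256MB)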
John
"lwidjaya" <lwidjaya@.discussions.microsoft.com> wrote in message
news:3DAF644A-9416-4F05-9986-8A1D929FA0DD@.microsoft.com...
> Hi John,
> We're running a transaction log backup every hour now. As you said, the
> transaction log backup didn't shrink the ldf file. Is it ok if we shrink
> the
> database once in a while?
> Thanks,
> Lisa
> "John Bell" wrote:
|||> We're running a transaction log backup every hour now. As you said, the
> transaction log backup didn't shrink the ldf file. Is it ok if we shrink
> the
> database once in a while?
Why? Do you once in a while lease that disk space to some other process?
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
|||No, we don't lease the space to another process. We had an ldf file of
less than 1 GB before; then our consultant did some 'table reorganizing' and
the ldf suddenly grew to 13.5 GB. We only have 2 GB of free space now for the
log file. So, I'm wondering if we can shrink it to get more space, just in
case we need the space in the future.
I have a question: in the database properties Taskpad, it shows that the
transaction log space is 13.5 GB, used: 76 MB, and free: 13.4 GB. But how
come the ldf file size shows 13.5 GB? Does that mean the actual size of the
ldf file would be only 76 MB if we shrink it?
Thanks for the replies!
"Aaron Bertrand [SQL Server MVP]" wrote:

> Why? Do you once in a while lease that disk space to some other process?
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp
>
>
|||> No, we don't lease the space to another process. We had an ldf file of
> less than 1 GB before; then our consultant did some 'table reorganizing'
> and the ldf suddenly grew to 13.5 GB.
Well, that's a special case, and it fits under John Bell's comment
(something very abnormal). I assume your consultant does not reorganize
tables daily?

> I have a question: in the database properties Taskpad, it shows that the
> transaction log space is 13.5 GB, used: 76 MB, and free: 13.4 GB. But how
> come the ldf file size shows 13.5 GB? Does that mean the actual size of the
> ldf
> file would be only 76 MB if we shrink it?
No, that is not necessarily true. There are multiple virtual log files, and
how a shrink will change the physical log file depends on where the active
portion of the log sits among those virtual log files.
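You can look at this yourself with something like the following (a sketch;
the logical log file name is a placeholder, and DBCC LOGINFO is undocumented):
USE YourDatabase
DBCC SQLPERF(LOGSPACE)   -- log size and percent used, per database
DBCC LOGINFO             -- one row per virtual log file; Status = 2 means active
DBCC SHRINKFILE (YourDatabase_Log, 1024)   -- one-off shrink, target size in MB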
A
|||No, our consultant only did it once, because we made changes to our ERP
data.
So, I guess we can do the database shrink since it's an 'abnormal case'?
Thanks.
"Aaron Bertrand [SQL Server MVP]" wrote:

> Well, that's a special case, and it fits under John Bell's comment
> (something very abnormal). I assume your consultant does not reorganize
> tables daily?
>
> No, that is not necessarily true. There are multiple virtual log files, and
> how a shrink will change the physical log file depends on where the active
> portion of the log sits among those virtual log files.
> A
>
>

Losing server means data loss even when transaction log is unhurt?

Hi,
I have log shipping with two MS SQL Servers 2000 SP3. The BOL says that it is
possible to switch over to the secondary log shipping server and recover up
to the point of failure when the primary data file has failed. But what
happens in all the other failure scenarios when the production log shipping
server is gone, the transaction log file is still available but it's
impossible to backup the last transaction log with the NO_TRUNCATE option
(since the server itself isn't running and probably the master database is
damaged)? Am I bound to lose all the transactions since the last transaction
log backup?
-- Thanks, Oskar.
Oskar,
If you cannot run BACKUP LOG on the production server, you will have to run
something like this on the standby:
RESTORE DATABASE database_name WITH RECOVERY
EXEC SP_DBOPTION 'database_name', 'read only', 'false'
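(The RESTORE brings the standby database online, and the sp_dboption call
clears the read-only flag, which log shipping typically leaves set on the
secondary.)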
"Oskar" <Oskar@.discussions.microsoft.com> wrote in message
news:BCCD71E3-BB89-4BEE-B601-03E58ECBFA79@.microsoft.com...
> Hi,
> I have log shipping with two MS SQL Servers 2000 SP3. The BOL says that it
> is
> possible to switch over to the secondary log shipping server and recover
> up
> to the point of failure when the primary data file has failed. But what
> happens in all the other failure scenarios when the production log
> shipping
> server is gone, the transaction log file is still available but it's
> impossible to backup the last transaction log with the NO_TRUNCATE option
> (since the server itself isn't running and probably the master database is
> damaged)? Am I bound to lose all the transactions since the last
> transaction
> log backup?
> -- Thanks, Oskar.
>
|||Oskar wrote:
> Hi,
> I have log shipping with two MS SQL Servers 2000 SP3. The BOL says that it is
> possible to switch over to the secondary log shipping server and recover up
> to the point of failure when the primary data file has failed. But what
> happens in all the other failure scenarios when the production log shipping
> server is gone, the transaction log file is still available but it's
> impossible to backup the last transaction log with the NO_TRUNCATE option
> (since the server itself isn't running and probably the master database is
> damaged)? Am I bound to lose all the transactions since the last transaction
> log backup?
> -- Thanks, Oskar.
>
Yes, you will lose anything that occurred after the last log backup.
Back up as frequently as necessary to minimize the damage - if you can't
afford to lose 15 minutes of data, back up every 5 minutes.
Bringing the standby database online is as simple as running
RESTORE DATABASE standbyDBName WITH RECOVERY
Tracy McKibben
MCDBA
http://www.realsqlguy.com
|||I think this will answer some of your questions (if I understand the issue
correctly)
http://msdn2.microsoft.com/en-us/library/ms179314.aspx
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Oskar" <Oskar@.discussions.microsoft.com> wrote in message
news:BCCD71E3-BB89-4BEE-B601-03E58ECBFA79@.microsoft.com...
> Hi,
> I have log shipping with two MS SQL Servers 2000 SP3. The BOL says that it
> is
> possible to switch over to the secondary log shipping server and recover
> up
> to the point of failure when the primary data file has failed. But what
> happens in all the other failure scenarios when the production log
> shipping
> server is gone, the transaction log file is still available but it's
> impossible to backup the last transaction log with the NO_TRUNCATE option
> (since the server itself isn't running and probably the master database is
> damaged)? Am I bound to lose all the transactions since the last
> transaction
> log backup?
> -- Thanks, Oskar.
>
|||Roger, thank you. Unfortunately this isn't what I'm after. Basically I
wanted to know if it's still possible to recover up to the point of failure
in cases where the primary data file of a database and the server to which it
was attached are gone but the transaction log of the database is still
intact. If that happens there is no way I can issue a BACKUP LOG ... WITH
NO_TRUNCATE (or any other command) on the server, because it's gone. Also
note that I don't have MS SQL Server 2005, only 2000.
-- Thanks, Oskar
"Roger Wolter[MSFT]" wrote:

> I think this will answer some of your questions (if I understand the issue
> correctly)
> http://msdn2.microsoft.com/en-us/library/ms179314.aspx
>
> --
> This posting is provided "AS IS" with no warranties, and confers no rights.
> Use of included script samples are subject to the terms specified at
> http://www.microsoft.com/info/cpyright.htm
> "Oskar" <Oskar@.discussions.microsoft.com> wrote in message
> news:BCCD71E3-BB89-4BEE-B601-03E58ECBFA79@.microsoft.com...
>
>
|||But you must have a server someplace right? Your log shipping destination?
Can't you do the backup log command from there? Maybe I'm missing something
here.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Oskar" <Oskar@.discussions.microsoft.com> wrote in message
news:DCD111CF-E527-4C22-81EE-E84027ADDCD2@.microsoft.com...
> Roger, thank you. Unfortunately this isn't what I'm after. Basically I
> wanted
> to know if it's still possible to recover up to the point of failure in
> cases
> where the primary data file of a database and the server to which it was
> attached
> are gone but the transaction log of the database is still intact. If that
> happens there is no way I can issue a BACKUP LOG ... WITH NO_TRUNCATE (or
> any
> other command) on the server, because it's gone. Also note that I don't
> have
> MS SQL Server 2005, only 2000.
> -- Thanks, Oskar
> "Roger Wolter[MSFT]" wrote:
|||Here's the KB http://support.microsoft.com/kb/253817/en-us
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:OJr4L3yOHHA.4644@.TK2MSFTNGP03.phx.gbl...
> Yes, that should be possible:
> On some healthy SQL Server, you create a new database. Stop that SQL
> Server. Delete the two database files. "Slide" in your log file (ldf) from
> the production SQL Server in place of the log file of this newly created
> database. Start this new SQL Server. Do the log backup (with NO_TRUNCATE).
> I believe that there's a KB describing this (search and you should find),
> but the steps are pretty straight forward.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Oskar" <Oskar@.discussions.microsoft.com> wrote in message
> news:DCD111CF-E527-4C22-81EE-E84027ADDCD2@.microsoft.com...
>
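For what it's worth, the steps in that KB and in Tibor's post boil down to
something like this sketch (the database name and path are placeholders, and
the helper database coming up suspect after the restart is expected):
-- On a healthy SQL Server:
CREATE DATABASE TailRescue
-- Stop the SQL Server service and delete TailRescue's mdf and ldf.
-- Copy the surviving production ldf in, renamed to TailRescue's log file name.
-- Restart the service, then capture the tail of the log:
BACKUP LOG TailRescue TO DISK = 'D:\Backups\TailRescue_tail.trn' WITH NO_TRUNCATE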
|||Roger,
The log shipping destination would of course be available, and I would be
able to do log backups there. The point is that the destination would be
behind the source (i.e. the production database) with regard to the latest
transactions: those that happened between the last log backup that was made
on the source and copied to the destination, and the time of the source's
failure. So if the source is lost and I'm not able to make a last backup of
those transactions (with the NO_TRUNCATE option), because the server itself
is also nonfunctional, then I'm losing those transactions, which is
unacceptable.
Sorry Roger, I can't explain it any better. Tibor seems to have got the point.
-- Thanks, Oskar.
"Roger Wolter[MSFT]" wrote:

> But you must have a server someplace right? Your log shipping destination?
> Can't you do the backup log command from there? Maybe I'm missing something
> here.
> --
> This posting is provided "AS IS" with no warranties, and confers no rights.
> Use of included script samples are subject to the terms specified at
> http://www.microsoft.com/info/cpyright.htm
> "Oskar" <Oskar@.discussions.microsoft.com> wrote in message
> news:DCD111CF-E527-4C22-81EE-E84027ADDCD2@.microsoft.com...
>
>
|||Thanks Roger. I think this is the one.
"Roger Wolter[MSFT]" wrote:

> Here's the KB http://support.microsoft.com/kb/253817/en-us
> --
> This posting is provided "AS IS" with no warranties, and confers no rights.
> Use of included script samples are subject to the terms specified at
> http://www.microsoft.com/info/cpyright.htm
> "Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
> message news:OJr4L3yOHHA.4644@.TK2MSFTNGP03.phx.gbl...
>
>

Losing rows from file to destination table - need troubleshooting help

I am losing data from time to time and cannot figure out where the rows are vanishing to. In essence, I have 5 files that I process. The processing occurs on a daily basis via one SQL Agent job that calls 5 individual packages, each doing a different piece of work.

I stage each file to a temp table, then perform some minor transformations, then union the rowsets from each file before landing them in another temporary table. A subsequent package performs additional transformations that apply to the entire dataset before inserting the rows into their final destination. Along the way, I reject some records based on the business rules applied. Each package in the entire job is logging (OnError, TaskFailed, Pre and Post execute). There are no errors being generated. No rows are being rejected to my reject tables either.

Without getting into the specific transforms, etc. being used in this complex process, has anyone seen similar unexplained behaviour? I've not been able to identify any pattern, except that it is usually only 1 or 2 of the record types (specific to a source file) that will ever fail to be loaded. No patterns around volumes for specific source files. There are some lookups and dedupes; however, I've seen the records 'drop off' before reaching these transforms. I have noticed that my final destination load was not using fast load. However, sometimes the records disappear before even getting to my final staging table, which is loaded using fast load. I am going to turn on logging of the OnPipelineRowsSent event. Any other suggestions for troubleshooting/tracking down these disappearing records?

Thanks

A couple more clarifications:

I have run the same files through manually in debug mode and found that I can watch all the rows through the entire process.

We have seen strange behaviour when running packages as scheduled jobs through SQL Agent.

The process utilizes unions, which seem a bit clunky.

|||

Joe,

We have seen similar problems on my current project, so yesterday we turned on OnPipelineRowsSent logging.

Another thing we have done is output the data from each transform component to a file for later examination. The MULTICAST transform is invaluable for this.

As yet we haven't found out what is going on. It's strange.
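If you are logging to the SQL Server log provider, the rows-sent counts land
in the sysdtslog90 table (in whatever database the logging connection points
at; msdb is assumed in this sketch), so you can pull them out afterwards:
SELECT source, starttime, message
FROM msdb.dbo.sysdtslog90
WHERE event = 'OnPipelineRowsSent'
ORDER BY starttime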

-Jamie

|||

I'm not sure I would use the adjective strange, but...

In trying to troubleshoot this process, I first changed the union transform that was taking 6 input streams and broke it out into 5 individual cascading unions, each with 2 input streams. No change in behaviour.

I then changed the package that moves this data by adding multicasts to output to a file after every transform along the way up to the final destination, after the 5 unions. Just adding the multicasts into the flow has resulted in no rows vanishing from the daily loads for the past week. Unfortunately, I don't have time to troubleshoot further, but I think this demonstrates that there is indeed a serious bug here. I still suspect it has to do with the union transform. I am quite scared for anyone else's shop that has decided to standardize ETL on this tool, as we have. As developers, we only have time to test our code, not to verify that the native tool functionality is behaving as expected. In addition, having to monitor on a regular basis that it is performing properly is not acceptable.

Hoping this problem magically went away with SP1....

JH

Losing record after synchronizing subscriptions

Hi

We have a server running SQL Server 2000 SP4 with a database that is being replicated.

There are 17 subscribers running MSDE SP4 using merge replication. Replication is started manually.

Initially we tested this with two subscriptions and everything went well, but for the past 3 months we have been facing a weird problem while syncing. We have massive data loss on records that were inserted at the subscribers. Records seem to disappear, but only records that have a foreign key constraint. What I mean is that, for example, a record is inserted into the table that holds our client records with primary key 'ClientID', and then a record is inserted into a table of actions for that client with a foreign key 'ClientID' referring to the client table. After syncing, the client record is inserted correctly in the database on the publisher, but the records in the table of actions are gone.

As far as I know the tables are correctly formed, with identity columns set to 'not for replication', and so on.

In short, I can't find the problem, especially since it doesn't always happen.

If anyone has faced this and found a solution, please let me know.

Thanks.

Raf

Are there any constraint violations happening?

Also look at the article property compensate_for_errors. It is turned ON by default. What this does is: for, say, an insert coming from the publisher to a subscriber, if there is a constraint violation or some other failure, a delete is sent back to the publisher, resulting in data loss.

Set this property to false and monitor your system for data integrity.
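Assuming your version exposes that property, the change is made at the
publisher with sp_changemergearticle (the publication and article names here
are placeholders):
EXEC sp_changemergearticle
    @publication = 'YourPublication',
    @article = 'ClientActions',
    @property = 'compensate_for_errors',
    @value = 'false'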

|||

Thanks

I'll check this out

Losing permissions in tempdb

Hi,
I have noticed that every time SQL Server restarts, the permissions on tempdb
go away. Is there a way to fix this?
Also I thought this sort of information was stored in the master database?
Tempdb is recreated whenever SQL Server is restarted. Guest
user exists in tempdb by default which is how users access
tempdb for temp tables and such. What permissions are
causing problems? Tempdb is recreated using the model
database as a template and maybe you have something wrong
with the model database. Hard to say as I don't know what
you are trying to accomplish or what problems the recreation
of tempdb is causing you.
-Sue
On Thu, 1 Apr 2004 08:11:16 -0800, "Jason"
<anonymous@.discussions.microsoft.com> wrote:

>Hi,
>I have noticed that every time SQL Server restarts, the permissions on tempdb
go away. Is there a way to fix this?
>Also I thought this sort of information was stored in the master database?|||Generally you shouldn't be granting permissions to users in tempdb. What is
the requirement for this? (As Sue said, tempdb gets rebuilt every time the
SQL Instance or server is restarted - this is a desirable thing).|||Hi,
It is a vendor-supplied solution. They use tempdb to store session state.
They didn't use a guest account to access tempdb though; there is an
application account that reads and writes session state data to tempdb.
When the server is bounced, SQL Server restarts and the application account
loses all privileges to tempdb.
Also, I thought all the privilege info was saved in the master db; it seems,
based on this, that at least some info is also stored in the target database
as well... can you explain, or point me to a Books Online chapter that
explains this?
thanks|||Try setting up whatever users and permissions the application
needs in the model database. Tempdb is recreated using model as
a template.
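For example (a sketch using the SQL 2000-era procedures; the login name and
roles are placeholders for whatever the application actually needs):
USE model
EXEC sp_grantdbaccess 'YourAppLogin'
EXEC sp_addrolemember 'db_datareader', 'YourAppLogin'
EXEC sp_addrolemember 'db_datawriter', 'YourAppLogin'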
The help topic 'Users' in SQL Server Books Online explains the
difference between users and logins and some of what is
stored where. User accounts are specific to a database, and
a user account is associated with permissions and object
ownership in a given database. Master stores information
on logins, as well as user information specific to the master
database (not all of the databases).
-Sue
On Mon, 5 Apr 2004 06:16:06 -0700, jason_fin
<anonymous@.discussions.microsoft.com> wrote:

>Hi,
>It is a vendor-supplied solution. They use tempdb to store session state.
They didn't use a guest account to access tempdb though; there is an
application account that reads and writes session state data to tempdb.
>When the server is bounced, SQL Server restarts and the application account
loses all privileges to tempdb.
>Also, I thought all the privilege info was saved in the master db; it seems,
based on this, that at least some info is also stored in the target database
as well... can you explain, or point me to a Books Online chapter that
explains this?
>thanks
>

Losing Oracle user name and password

I have written a simple SQL Server 2005 package to pull some data from Oracle (using ODBC) and pump it into SQL Server. When I run it from the server in debug mode in VS it works fine. When I schedule the job it errors out with "ora-01005: null password given; logon denied." The password is there. Has anyone experienced this? Is there a security setting somewhere preventing me from saving passwords? Is there a workaround? Thanks.

Passwords are not saved in a package unless in an encrypted format. Check the ProtectionLevel property of your package to set this up.

-Jamie
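One common pattern (a sketch, not the only option): set ProtectionLevel to
EncryptSensitiveWithPassword, and have the scheduled job supply that package
password at run time via dtexec's /Decrypt switch (the path and password are
placeholders):
dtexec /F "C:\Packages\PullOracleData.dtsx" /De "PackagePassword"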

|||Thanks, that was it.

Losing odbc connection on install

We have a system which uses an ODBC connection to connect to SQL Server 2000
for Ceridian Prism--an application for HR departments. Now we are installing
a VB.Net application which uses MSDE (SQL 7). The problem is that for some
reason we are losing the original ODBC connection to 2000 when we install
MSDE. Now I realize that installing MSDE 2000 may help this issue, however
we really need to use SQL 7 for now. Does anyone have any ideas about what
might be causing the loss of the connection? Is the SQL 7 install
overwriting something that the ODBC needs for the 2000 connection? Is it
something with named instances? We are using the standard MSDE installation
from Microsoft.
Thanks.
My guess is that the MSDE 7 installation is installing an older version of
MDAC which is not ADO.NET compatible. It could also be an issue with named
instances, as earlier versions of MDAC (pre-2.5, I think) did not support
named instances.
Jim
"LisaConsult" <lisaconsult@.online.nospam> wrote in message
news:81EA76DC-A07B-4982-B9F2-CD31ACE1F0B0@.microsoft.com...
> We have a system which uses an ODBC connection to connect to SQL Server
> 2000
> for Ceridian Prism--an application for HR departments. Now we are
> installing
> a VB.Net application which uses MSDE (SQL 7). The problem is that for
> some
> reason we are losing the original ODBC connection to 2000 when we install
> MSDE. Now I realize that installing MSDE 2000 may help this issue,
> however
> we really need to use SQL 7 for now. Does anyone have any ideas of what
> might be causing the loss in the connection? Is the SQL 7 install
> overwriting something that the ODBC needs for the 2000 connection? Is it
> something with named instances? We are using the standard MSDE
> installation
> from Microsoft.
> Thanks.
|||Thanks for your response. Actually, we know that it is somehow SQL Server
related and not MDAC because once we uninstalled Server Manager and MSDE, the
connection worked fine again. As an aside, if they needed MDAC, we installed
2.6, but as I said, I don't believe this was the issue. Any other thoughts?
Thanks
"Jim Young" wrote:

> My guess is that the MSDE 7 installation is installing an older version of
> MDAC which is not ADO.Net compatible. It could also be an issue with named
> instances as earlier version of MDAC (pre 2.5 I think) did not support named
> instances.
> Jim
> "LisaConsult" <lisaconsult@.online.nospam> wrote in message
> news:81EA76DC-A07B-4982-B9F2-CD31ACE1F0B0@.microsoft.com...
>
>
|||Oops, my mistake, this app is actually still a VB6 app.
"LisaConsult" wrote:

> We have a system which uses an ODBC connection to connect to SQL Server 2000
> for Ceridian Prism--an application for HR departments. Now we are installing
> a VB.Net application which uses MSDE (SQL 7). The problem is that for some
> reason we are losing the original ODBC connection to 2000 when we install
> MSDE. Now I realize that installing MSDE 2000 may help this issue, however
> we really need to use SQL 7 for now. Does anyone have any ideas of what
> might be causing the loss in the connection? Is the SQL 7 install
> overwriting something that the ODBC needs for the 2000 connection? Is it
> something with named instances? We are using the standard MSDE installation
> from Microsoft.
> Thanks.
|||I still think that it is a problem with the data connection layer and not
SQL Server. Have you tried installing MDAC 2.8 after MSDE 7 is installed?
Jim
"LisaConsult" <lisaconsult@.online.nospam> wrote in message
news:2271E92F-AFE5-4C14-A8FA-2A05326227B6@.microsoft.com...
> Thanks for your response. Actually, we know that it is somehow SQL Server
> related and not MDAC because once we uninstalled Server Manager and MSDE,
> the
> connection worked fine again. As an aside, if they needed MDAC, we
> installed
> 2.6, but as I said, I don't believe this was the issue. Any other
> thoughts?
> Thanks
> "Jim Young" wrote:

Losing my parameters and fields?

Dear MSDN!
I have designed a report in a VS 2005 Report Server project. The report
connects to an Analysis Services 2005 server. Everything works fine in the
Layout and Preview tabs until I click the Data tab. After I have clicked the
Data tab, the report starts to show errors in the Output and Error List
windows:
[rsFieldReference] The Value expression for the textbox
'Sales_Price_Gross_Incl_Discount_1' refers to the field
'Sales_Price_Gross_Incl_Discount'. Report item expressions can only refer to
fields within the current data set scope or, if inside an aggregate, the
specified data set scope.
Visual Studio has automatically checked out the report and changed the XML,
deleting the parameter and field definitions.
What is the problem?
Thanks in advance!|||Hello Grundh,
I found a similar issue in our internal database, but I have not found
the solution yet.
I am researching this issue and appreciate your patience.
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.

Losing my margin!?

I am designing a report to output address labels (Avery 5160), and the
spacing of the data in the columns and rows decreases as the print moves
down the page. By the final row of labels, the name line is in the
preceding row. I am using a list object, and the margin settings are per
Avery's spec sheet.
Anyone else experience this and have a fix?
Thanks,
AndyWhat rendering output are you using? My guess is you'll have your best luck
with PDF or TIFF. HTML is pretty non-deterministic.
--
Cheers,
'(' Jeff A. Stucker
\
Business Intelligence
www.criadvantage.com
---
"Andrew King" <acking@.cal.ameren.com> wrote in message
news:%23DfU0Z0IFHA.1176@.TK2MSFTNGP15.phx.gbl...
>I am designing a report to output address labels (Avery 5160), and the
>spacing of the data in the columns and rows decreases as the print
>moves down the page. By the final row of labels, the name line is in the
>preceding row. I am using a list object, and the margin settings are
>per Avery's spec sheet.
> Anyone else experience this and have a fix?
> Thanks,
> Andy
I am using PDF. I have also tried fixing the size of the fields by unchecking
the "Can increase to accommodate contents" option.
Andy
"Jeff A. Stucker" <jeff@.mobilize.net> wrote in message
news:%23869bC3IFHA.4028@.tk2msftngp13.phx.gbl...
> What rendering output are you using? My guess is you'll have your best
> luck with PDF or TIFF. HTML is pretty non-deterministic.
> --
> Cheers,
> '(' Jeff A. Stucker
> \
> Business Intelligence
> www.criadvantage.com
> ---
> "Andrew King" <acking@.cal.ameren.com> wrote in message
> news:%23DfU0Z0IFHA.1176@.TK2MSFTNGP15.phx.gbl...
>>I am designing a report to output address labels (Avery 5160), and the
>>spacing of the data in the columns and rows decreases as the print
>>moves down the page. By the final row of labels, the name line is in the
>>preceding row. I am using a list object, and the margin settings are
>>per Avery's spec sheet.
>> Anyone else experience this and have a fix?
>> Thanks,
>> Andy
>|||Solved! Placed the list object inside a rectangle to fix the size of the
label and removed the right and bottom padding from the field.
"Andrew King" <acking@.cal.ameren.com> wrote in message
news:e4hyBC$IFHA.2844@.TK2MSFTNGP10.phx.gbl...
>I am using PDF. I have also tried fixing the size of the fields by unchecking
>the "Can increase to accommodate contents" option.
> Andy
> "Jeff A. Stucker" <jeff@.mobilize.net> wrote in message
> news:%23869bC3IFHA.4028@.tk2msftngp13.phx.gbl...
>> What rendering output are you using? My guess is you'll have your best
>> luck with PDF or TIFF. HTML is pretty non-deterministic.
>> --
>> Cheers,
>> '(' Jeff A. Stucker
>> \
>> Business Intelligence
>> www.criadvantage.com
>> ---
>> "Andrew King" <acking@.cal.ameren.com> wrote in message
>> news:%23DfU0Z0IFHA.1176@.TK2MSFTNGP15.phx.gbl...
>>I am designing a report to output address labels (Avery 5160), and the
>>spacing of the data in the columns and rows decreases as the print
>>moves down the page. By the final row of labels, the name line is in the
>>preceding row. I am using a list object, and the margin settings are
>>per Avery's spec sheet.
>> Anyone else experience this and have a fix?
>> Thanks,
>> Andy
>>
>