Friday, March 30, 2012
Loss of server
Having read BOL, I was of the understanding that if a machine was lost
(anything but disk failure) then it was very difficult to recover the data.
The reason being that the data- and log-files were still "attached" to the
dead SQLServer and needed to be detached from it before they could be used
again; a difficult operation if the machine is dead.
However, someone suggested that this was not the case. If a machine dies
then it is a simple operation to physically disconnect the disks from the
dead machine and connect them to a new machine and continue working. This
assumes the Standard Edition of SQLServer (i.e. no clustering involved).
I can test this, but it will take a few days to set the equipment up, so I
wondered in the meantime whether anyone could confirm whether this was the
case. If so, then presumably a SAN would present an even simpler solution,
particularly if the disk set is a RAID5+1 configuration?
Thanks in advance
Griff
Griff,
The SQL Server documentation says that you can attach a database if you first detached it.
You *might* be able to attach it even if you didn't detach it first, but consider this as one of
those "lucky" situations. It is not guaranteed or documented.
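For reference, the attach step on a replacement server would look something like this sketch (SQL Server 2000 syntax; the database name and file paths are hypothetical, and this assumes the mdf/ldf files were copied intact from the old disks):

```sql
-- Hypothetical database name and paths; assumes MyDB.mdf and MyDB_log.ldf
-- were copied intact from the dead server's disks to the new server.
EXEC sp_attach_db
    @dbname    = N'MyDB',
    @filename1 = N'D:\Data\MyDB.mdf',
    @filename2 = N'E:\Logs\MyDB_log.ldf'
```

If the database was not detached cleanly, this may or may not succeed, which is exactly the "lucky situation" caveat above.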
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Griff" <Howling@.The.Moon> wrote in message news:e7lkMBAkEHA.3148@.TK2MSFTNGP10.phx.gbl...
> Dear all
> Having read BOL, I was of the understanding that if a machine was lost
> (anything but disk failure) then it was very difficult to recover the data.
> The reason being that the data- and log-files were still "attached" to the
> dead SQLServer and needed to be detached from it before they could be used
> again; a difficult operation if the machine is dead.
> However, someone suggested that this was not the case. If a machine dies
> then it is a simple operation to physically disconnect the disks from the
> dead machine and connect them to a new machine and continue working. This
> assumes the Standard Edition of SQLServer (i.e. no clustering involved).
> I can test this, but it will take a few days to set the equipment up, so I
> wondered in the mean time whether anyone could confirm whether this was the
> case. If so, then presumably a SAN would present an even simplier solution,
> particularly if the disk set is a RAID5+1 configuration?
> Thanks in advance
> Griff
>|||Hi,
What is a server failure?
Which part(s) need to fail to give a server failure? CPU? Memory?
Motherboard? Disc Controller? Boot Disc? Master Database? Data drives? Log
Drives? PSU? etc?
You are highlighting the importance of DP (I prefer DP to DR - Disaster
Prevention is better than Cure). So, what can fail, what can you do to
prevent it? What do you do if it happens? Have you rehearsed for it? Does
the process work?
So a PSU blows up and takes the motherboard and CPU(s) with it. The
system/boot disc drive goes at the same time. Sounds like a server failure
to me. What do you do? Have DP? Then you may already have a standby server,
backup copies of databases on other computers, be using log shipping, and
have only to switch to standby... It is always better to be prepared before
the event than to rely on a toolkit to fish you out of some scenario after
an unpredictable event.
Recovering SQL Server databases in scenarios such as this is perhaps the
poorest documented part of SQL Server. What happens if the log drive dies at
run time? Or the data drive? Or the RAID controller? (That happened to me a
few weeks ago and was not pleasant, we did have DP in place however). We all
know the theory, but if you wish to get things back up and
running with the least data loss, as the system is supposed to be designed, you
seem to have no choice but to ring MS, because if you ask here that is what they
will tell you to do.
So rule #1 for SQL Server DP: Don't lose the data.
Comments / constructive criticism welcome.
- Tim
"Griff" <Howling@.The.Moon> wrote in message
news:e7lkMBAkEHA.3148@.TK2MSFTNGP10.phx.gbl...
> Dear all
> Having read BOL, I was of the understanding that if a machine was lost
> (anything but disk failure) then it was very difficult to recover the
> data.
> The reason being that the data- and log-files were still "attached" to the
> dead SQLServer and needed to be detached from it before they could be used
> again; a difficult operation if the machine is dead.
> However, someone suggested that this was not the case. If a machine dies
> then it is a simple operation to physically disconnect the disks from the
> dead machine and connect them to a new machine and continue working. This
> assumes the Standard Edition of SQLServer (i.e. no clustering involved).
> I can test this, but it will take a few days to set the equipment up, so I
> wondered in the mean time whether anyone could confirm whether this was
> the
> case. If so, then presumably a SAN would present an even simplier
> solution,
> particularly if the disk set is a RAID5+1 configuration?
> Thanks in advance
> Griff
>|||Hi Tim
I agree with you completely. We use a server with RAID5+1 disks, and
implement log shipping onto a stand-by server. However, our consultant
pointed out that this provides us with a way of getting the service up
really quickly, but with loss of data (back to the last log that was
shipped). He suggested that the way to lose no data (provided that the
disks were not damaged) was simply to disconnect the SCSI cables from the
down server and connect them to the standby server, and that way no data was
lost (service might take longer to resume, but in our business down time is
perceived as better than loss of data). I just wished to question whether
this really was technically possible/reliable.
Griff|||Griff,
See my earlier reply. I suggest you ask the consultant where his strategy is documented. That should
end the discussion.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Griff" <Howling@.The.Moon> wrote in message news:OT$GeZBkEHA.1644@.tk2msftngp13.phx.gbl...
> Hi Tim
> I agree with you completely. We use a server with RAID5+1 disks, and
> implement log shipping onto a stand-by server. However, our consultant
> pointed out that this provides us with a way of getting the service up
> really quickly, but with loss of data (back to the last log that was
> shipped). He suggested that the way to lose no data (providing that the
> disks were not damaged) was to simply to disconnect the scsi cable to the
> down server and connect them to the standby server and that way no data was
> lost (service might take longer to resume, but down time in our business is
> perceived as better than loss of data). I just wished to question whether
> this really was technically possible/reliable.
> Griff
>
Losing my parameters and fields?
I have designed a report in a VS 2005 Report Server project. The report
connects to an Analysis Services 2005 server. Everything works fine in the Layout tab and
Preview tab until I click the Data tab. After I have clicked the Data tab, the
report starts to show errors in the output and error list window:
[rsFieldReference] The Value expression for the textbox
'Sales_Price_Gross_Incl_Discount_1' refers to the field
'Sales_Price_Gross_Incl_Discount'. Report item expressions can only refer to
fields within the current data set scope or, if inside an aggregate, the
specified data set scope.
Visual Studio has automatically checked out the report and changed the XML,
deleting the parameter and field definitions.
What is the problem?
Thanks in advance!
Hello Grundh,
I found a similar issue in our internal database, but I have not found
the solution yet.
I am performing research on this issue and I appreciate your patience.
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
Monday, March 26, 2012
looping through field in table
I have a table, and in one of the fields is the following information:
1,2,3,4,5, etc...
How do I 'loop' through this field to remove each comma and put each number into its own field in another table - so remove the first comma and put the number 1 into its own field, remove the 2nd comma and put the number 2 into its own field, etc., until all the numbers are in their own fields.
So it ends up being like this:
Col1 Col2 Col3 Col4 Col5 Col n.............
1 2 3 4 5 n...........
There may not be 5 numbers in the field, sometimes more, sometimes less, so I need to be able to tell when there are no more commas and numbers left.
Your help is much appreciated
Thanks
Gill
I have a function for extracting the nth element of a delimited string; if you can use functions, try this:
-- =============================================
-- Create inline function
-- =============================================
IF objectproperty(object_id(N'GetStringElement'),'IsScalarFunction') = 1
DROP FUNCTION GetStringElement
GO
CREATE FUNCTION GetStringElement(
@String varchar(100)
, @Element int
, @Seperator char(1) = ',')
RETURNS varchar(100)
AS
begin
declare @Pass int, @Index int, @LastIndex int, @Return varchar(100)
if (@String like '%' + replicate(',%',@Element - 1)) begin
select @Pass = 1
, @Index = 1
, @LastIndex = 0
while (@Pass <= @Element) begin
select @LastIndex = case @Index when 1 then 0 else @Index end
, @Index = charindex(',',@String,@Index + 1)
, @Pass = @Pass + 1
end
-- no trailing delimiter found: take the rest of the string
if (@Index = 0) set @Index = len(@String) + 1
set @Return = substring(@String,@LastIndex + 1, @Index - @LastIndex - 1)
end
RETURN @Return
end
GO
if object_id('tempdb..#psy') is not null
drop table #psy
create table #psy(f1 int identity(1,1) not null,f2 varchar(25),col1 varchar(10),col2 varchar(10),col3 varchar(10),col4 varchar(10),col5 varchar(10),col6 varchar(10))
insert into #psy (f2) values('1,2,3,4,5')
insert into #psy (f2) values('a,b,c')
insert into #psy (f2) values('George,John,Paul,Ringo')
select * From #psy
declare @pass int, @ColCount int, @TSQL varchar(255)
select @pass = 1
, @ColCount = 6
while (@pass <= @ColCount) begin
select @TSQL = 'update #psy ' +
'set col' + cast(@pass as varchar) + ' = dbo.GetStringElement(f2,' + cast(@pass as varchar) + ',default)'
, @pass = @pass + 1
exec(@TSQL)
end
select * from #psy
IF objectproperty(object_id(N'GetStringElement'),'IsScalarFunction') = 1
DROP FUNCTION GetStringElement
if object_id('tempdb..#psy') is not null
drop table #psy|||If functions are not your thing, you could try:
if object_id('tempdb..#psy') is not null
drop table #psy
create table #psy(f1 int identity(1,1) not null,f2 varchar(25),col1 varchar(10),col2 varchar(10),col3 varchar(10),col4 varchar(10),col5 varchar(10),col6 varchar(10))
insert into #psy (f2) values('1,2,3,4,5')
insert into #psy (f2) values('a,b,c')
insert into #psy (f2) values('George,John,Paul,Ringo')
select * From #psy
declare @RecordID int, @TSQL varchar(255), @pass int
, @Index int, @LastIndex int, @Return varchar(100)
, @String varchar(100), @Element int
select @RecordID = min(f1) from #psy
while (@RecordID is not null) begin
select @String = f2 from #psy where f1 = @RecordID
select @Index = 1
, @LastIndex = 0
, @pass = 1
while (@Index > 0) begin
select @LastIndex = case @Index when 1 then 0 else @Index end
, @Index = charindex(',',@String,@Index + 1)
if (@LastIndex > 0 and @Index = 0)
set @Return = substring(@String,@LastIndex + 1, 100)
else
set @Return = substring(@String,@LastIndex + 1, @Index - @LastIndex - 1)
select @TSQL = 'update #psy ' +
'set col' + cast(@pass as varchar) + ' = ''' + @Return + ''' ' +
'where f1 = ' + cast(@RecordID as varchar)
, @pass = @pass + 1
exec(@TSQL)
end
select @RecordID = min(f1) from #psy where f1 > @RecordID
end
select * from #psy
Monday, March 19, 2012
LookUP Transform...
Dear friends,
I have an ETL package that has a Lookup transform to get a rate from a table SpotRates.
The problem is when the match of some date in the SpotRates table doesn't exist...
And for those records I need to look up the next date...
For example...
SpotRate Table
When I first try to look up the date 17-04-2006, it doesn't give me any records... and I need to create a new lookup for the next date after 17-04-2006. And in this example the next date is 18-04-2006.. How can I do it?
I made a SQL query that gives me the next date with 2 parameters... but I'm having some errors...
SELECT TOP 1 Data
FROM Spot_Rates
WHERE (Currencies_Name = ?) AND (Data > CONVERT(DATETIME, ?, 102))
ORDER BY Data DESC
In this example, the parameters returned from lookup1 are:
Currencies_name = 'DOLAR ESTADOS UNIDOS'
DATE = '17-04-2006'
I need to create a second lookup transform to return the next date/currency for each row that didnt match in the first lookup...
Regards,
Pedro
What are your errors?
I also think you'd want "Data >= CONVERT(DATETIME......)"|||
I have tried this way...
In the Lookup transformation, in the first tab "Reference Table", I inserted this SQL query:
SELECT Data, RevalRate, Currencies_Name
FROM Spot_Rates
In the second tab "Columns" I set the relations between input and output and checked the fields that I want for output.
In the third tab "Advanced" I inserted:
select TOP 1 * from
(SELECT Data, RevalRate, Currencies_Name
FROM Spot_Rates) as refTable
WHERE ([refTable].[Currencies_Name] = ?) AND ( [refTable].[Data] > CONVERT(DATETIME, ?, 102))
ORDER BY Data DESC
But at this moment the package is taking a long time to finish... I'm still waiting... :-(
Regards,
pedro
|||And it's not returning the correct RevalRate for each row with Currencies_Name/Date...
It's returning the last date for this currency, and not the one for the input parameter date of each row... :-(
Regards
|||Do you have indexes at all on the Spot_Rates table? How large is the Spot_Rates table?|||USE [dbRentabilidade]
GO
/****** Object: Table [dbo].[Spot_Rates] Script Date: 04/16/2007 15:36:35 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Spot_Rates](
[Currencies_ShortName] [nvarchar](3) NULL,
[Currencies_Name] [varchar](32) NULL,
[Data] [datetime] NULL,
[RevalRate] [float] NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
About 55,000 rows...
|||Pedro,
I would like to make some points:
The query needs to have an OR clause, so it retrieves the exact match or the nearest greater one. If you are going to use TOP 1, the query must guarantee that the resultset is ordered ASC. If you use a Lookup transform the way you are using it (query w/parameters), it will work using partial cache, which means it will run the query for each row passing through. That is why it is so slow. How many rows are passing through? If the Spot_Rates table and the rows passing through the pipeline are in the same database, you are better off creating a database function to retrieve the right RevalRate. That way the DB engine will carry the overhead.|||Instead of the lookup transform, I'm trying to use an OLE DB Command... executing the following SQL stored procedure:
USE [dbRentabilidade]
GO
/****** Object: StoredProcedure [dbo].[R_SP_GET_NextDayRate] Script Date: 04/16/2007 18:17:33 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[R_SP_GET_NextDayRate]
@MyCurrency varchar(32),
@MyDate datetime
AS
SELECT TOP 1 Data AS TESTE
FROM Spot_Rates
WHERE (Currencies_Name = @MyCurrency) AND (Data > CONVERT(DATETIME, @MyDate, 102))
ORDER BY Data ASC
RETURN
But the problem is that I don't know how to read the values returned from the SP in order to continue the dataflow with these values... Is it possible?!
|||It looks like you cannot get columns back to the dataflow pipeline via OLE DB Command. But here is the thing: even if you find a way, the performance will be about the same as the one you get using the Lookup component. That is why I was asking if you could include the join or function right on your source component.|||And what do you think about inserting the non-matched values into a temp table, and using a lookup to find the rate in this temp table to continue the dataflow?
What do you think?
|||I have created an OLE DB Command to insert the values that didn't match the lookup into a table, and have a lookup to match the values in this table...
The problem now is that, in the dataflow, the values inserted by the OLE DB Command are not refreshed at the moment that I do a new match in the lookup transformation against this table...
...
7. LookUp
7.1. MutiCast
7.2. OLE DB Command
7.2.1 LookUp
And the values inserted by the OLE DB Command are in the database table, but not in the dataflow at the moment that data is passing through the lookup transformation....
Could someone help me?
Thanks
|||I am getting lost with your approach; but in general you could break the process into 2 data flows: the first one to insert the non-matches into the temp table, the second to do the lookup. Does this make sense?|||
PedroCGD wrote:
I have created a OLEDB Command to insert values that didnt match the lookup in a table, and have a loopup to match the values in thsi table...
The problem now, is that in the dataflow, the values inserted by OLE DB Command are not refreshed in the dataflow in the moment that I do a new match in the lookup transformation to this table...
...
7. LookUp
7.1. MutiCast
7.2. OLE DB Command
7.2.1 LookUp
And the values inserted in OLE DB Command are in the table database, but not in the dataflow in the moment that data is passing throut lookup transformation....
Could someone help me?
Thanks
Enable memory restriction in your second lookup (on the advanced tab). This will slow down the data flow, but it's the only way you will see the new data in the lookup.
I've been watching your threads on this over the last day. Have you considered just populating your rate table with data for each day? Even if you use a separate data flow to create a "temporary" working table with all the dates filled in, it would still make your process much simpler to implement and maintain.
|||Makes sense, and I already tried it before... but when I do the lookup the data doesn't exist yet... :-(
Regards,
Pedro
|||jwelch,
But we are talking about millions of rows...
I'm trying to include one more lookup... but I don't know if it will work... I'll give you feedback soon!
Thanks!
Friday, March 9, 2012
Looking what tables have data
Dear all,
This issue encompasses either SQL2k or SQL2k5.
I am looking for any script which might be smart enough to know which tables have data and which don't, and show that info.
I imagine a cursor with SELECT COUNT(*) or something like that.
Something like the 'sp_tables' system stored procedure, but along with the row counts.
Thanks in advance,
Enric
Moving to a more suitable group; this is not an SSIS question.
Plenty of ways to do this, but here is a simple script I keep in my "toolbox":
IF OBJECT_ID('tempdb..#Table_SpaceUsed') IS NOT NULL DROP TABLE #Table_SpaceUsed
CREATE TABLE #Table_SpaceUsed
(
name sysname, rows int,
reserved varchar(20),
data varchar(20),
index_size varchar(20),
unused varchar(20)
)
DECLARE @table sysname
DECLARE tables_cursor CURSOR FOR
SELECT name from sysobjects where type = 'U'
OPEN tables_cursor
FETCH NEXT FROM tables_cursor
INTO @table
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT #Table_SpaceUsed
EXEC sp_spaceused @table
FETCH NEXT FROM tables_cursor
INTO @table
END
CLOSE tables_cursor
DEALLOCATE tables_cursor
GO
ALTER TABLE #Table_SpaceUsed ADD reserved_mb int
ALTER TABLE #Table_SpaceUsed ADD data_mb int
ALTER TABLE #Table_SpaceUsed ADD data_kb int
ALTER TABLE #Table_SpaceUsed ADD index_size_mb int
ALTER TABLE #Table_SpaceUsed ADD index_size_kb int
ALTER TABLE #Table_SpaceUsed ADD unused_mb int
GO
UPDATE #Table_SpaceUsed
SET reserved_mb = CASE
WHEN RIGHT(reserved, 3) = ' MB' THEN CAST(REPLACE(reserved, ' MB', '') AS int)
WHEN RIGHT(reserved, 3) = ' KB' THEN ROUND(CAST(REPLACE(reserved, ' KB', '') AS numeric(18,2)) / 1024.00, 0) END,
data_mb = CASE
WHEN RIGHT(data, 3) = ' MB' THEN CAST(REPLACE(data, ' MB', '') AS int)
WHEN RIGHT(data, 3) = ' KB' THEN ROUND(CAST(REPLACE(data, ' KB', '') AS numeric(18,2)) / 1024.00, 0) END,
data_kb = CASE
WHEN RIGHT(data, 3) = ' MB' THEN ROUND(CAST(REPLACE(data, ' MB', '') AS numeric(18,2)) * 1024.00, 0)
WHEN RIGHT(data, 3) = ' KB' THEN CAST(REPLACE(data, ' KB', '') AS int) END,
index_size_mb = CASE
WHEN RIGHT(index_size, 3) = ' MB' THEN CAST(REPLACE(index_size, ' MB', '') AS int)
WHEN RIGHT(index_size, 3) = ' KB' THEN ROUND(CAST(REPLACE(index_size, ' KB', '') AS numeric(18,2)) / 1024.00, 0) END,
index_size_kb = CASE
WHEN RIGHT(index_size, 3) = ' MB' THEN ROUND(CAST(REPLACE(index_size, ' MB', '') AS numeric(18,2)) * 1024.00, 0)
WHEN RIGHT(index_size, 3) = ' KB' THEN CAST(REPLACE(index_size, ' KB', '') AS int) END,
unused_mb = CASE
WHEN RIGHT(unused, 3) = ' MB' THEN CAST(REPLACE(unused, ' MB', '') AS int)
WHEN RIGHT(unused, 3) = ' KB' THEN ROUND(CAST(REPLACE(unused, ' KB', '') AS numeric(18,2)) / 1024.00, 0) END
SELECT name, rows, CASE WHEN rows > 0 THEN CAST(ROUND(CAST((data_kb+index_size_kb)AS numeric(38,6)) / CAST(rows AS numeric(38,6)), 2) AS numeric(10,2)) ELSE 0 END AS rows_size_kb, reserved_mb, data_mb, index_size_mb, unused_mb
FROM #Table_SpaceUsed ORDER BY reserved_mb DESC
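If only row counts are needed (rather than space used), a lighter sketch reads the catalog directly; note that the counts in sysindexes are maintained lazily, so treat them as approximate (DBCC UPDATEUSAGE refreshes them):

```sql
-- Approximate row count per user table (SQL 2000-era system tables).
-- indid 0 = heap, indid 1 = clustered index; each table has exactly one such row.
SELECT o.name AS table_name, i.rowcnt AS approx_rows
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id AND i.indid IN (0, 1)
WHERE o.type = 'U'
ORDER BY i.rowcnt DESC
```

Tables with approx_rows of 0 are the (probably) empty ones; run a SELECT COUNT(*) on any table where an exact answer matters.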
Looking for the meaning of some labels
Dear fellows,
LOP_BEGIN_XACT, LOP_COMMIT_XACT,
LOP_INSERT_ROWS
Labels like those are easily recognizable when you explore LDFs by means of DBCC or whatever, but the meaning of the following ones is tricky to work out:
LOP_IDENTITY_TYPE
LOP_DELTA_SYSIND
LOP_SET_BITS
Does anyone have any link related to this? I tried hard to search with Google or something like that, but with no success at all.
Hi Enric,
These are internal structures, can you explain why you need these?
-Matt
|||As Matt says, these are internal structures, and aren't documented in public forums. If you have a need to see these structures, please continue the conversation offline with Don Vilen donv@.microsoft.com