Friday, March 30, 2012
Loss of server
Having read BOL, I was of the understanding that if a machine was lost
(anything but disk failure) then it was very difficult to recover the data.
The reason being that the data- and log-files were still "attached" to the
dead SQLServer and needed to be detached from it before they could be used
again; a difficult operation if the machine is dead.
However, someone suggested that this was not the case. If a machine dies
then it is a simple operation to physically disconnect the disks from the
dead machine and connect them to a new machine and continue working. This
assumes the Standard Edition of SQLServer (i.e. no clustering involved).
I can test this, but it will take a few days to set the equipment up, so I
wondered in the meantime whether anyone could confirm whether this was the
case. If so, then presumably a SAN would present an even simpler solution,
particularly if the disk set is a RAID5+1 configuration?
Thanks in advance
Griff|||Griff,
The SQL Server documentation says that you can attach a database if you first detached it.
You *might* be able to attach it even if you didn't detach it first, but consider this one of
those "lucky" situations. It is not guaranteed or documented.
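For reference, the documented sequence is a detach on the old server followed by an attach on the new one. A minimal T-SQL sketch of that route (the database name and file paths below are made up for illustration):

  -- On the original server, while it is still running:
  EXEC sp_detach_db 'MyDatabase'

  -- Copy or move MyDatabase.mdf and MyDatabase_log.ldf to the new server, then:
  EXEC sp_attach_db 'MyDatabase',
      'D:\SQLData\MyDatabase.mdf',
      'E:\SQLLogs\MyDatabase_log.ldf'

If the old server is already dead, only the attach half is available, which is exactly the "lucky", undocumented situation described above.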
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Griff" <Howling@.The.Moon> wrote in message news:e7lkMBAkEHA.3148@.TK2MSFTNGP10.phx.gbl...
> Dear all
> Having read BOL, I was of the understanding that if a machine was lost
> (anything but disk failure) then it was very difficult to recover the data.
> The reason being that the data- and log-files were still "attached" to the
> dead SQLServer and needed to be detached from it before they could be used
> again; a difficult operation if the machine is dead.
> However, someone suggested that this was not the case. If a machine dies
> then it is a simple operation to physically disconnect the disks from the
> dead machine and connect them to a new machine and continue working. This
> assumes the Standard Edition of SQLServer (i.e. no clustering involved).
> I can test this, but it will take a few days to set the equipment up, so I
> wondered in the mean time whether anyone could confirm whether this was the
> case. If so, then presumably a SAN would present an even simplier solution,
> particularly if the disk set is a RAID5+1 configuration?
> Thanks in advance
> Griff
>|||Hi,
What is a server failure?
Which part(s) need to fail to give a server failure? CPU? Memory?
Motherboard? Disc Controller? Boot Disc? Master Database? Data drives? Log
Drives? PSU? etc?
You are highlighting the importance of DP (I prefer DP to DR - Disaster
Prevention is better than Cure). So, what can fail, what can you do to
prevent it? What do you do if it happens? Have you rehearsed for it? Does
the process work?
So a PSU blows up and takes the motherboard and CPU(s) with it. The
system/boot disc drive goes at the same time. Sounds like a server failure
to me. What do you do? Have DP? Then you may already have a standby server,
backup copies of databases on other computers, be using log shipping, and
have only to switch to standby... It is always better to be prepared before
the event than to rely on a toolkit to fish you out of some scenario after
an unpredictable event.
Recovering SQL Server databases in scenarios such as this is perhaps the
most poorly documented part of SQL Server. What happens if the log drive dies at
run time? Or the data drive? Or the RAID controller? (That happened to me a
few weeks ago and was not pleasant; we did have DP in place, however.) We all
know the theory, but if you want to get things back up and running with the
least data loss the system was designed to allow, in practice you seem to have
no choice but to ring MS, 'cos if you ask here that is what they will tell
you to do.
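For the data-drive case specifically, there is at least one documented route to point-of-failure recovery: if the log file is still readable, back up the tail of the log and restore it on top of your existing backups. A rough sketch, with made-up database and file names:

  -- Capture the tail of the log even though the data files are gone
  BACKUP LOG MyDatabase
      TO DISK = 'E:\Backups\MyDatabase_tail.trn'
      WITH NO_TRUNCATE

  -- On the standby (or rebuilt) server, restore up to the point of failure
  RESTORE DATABASE MyDatabase
      FROM DISK = 'E:\Backups\MyDatabase_full.bak'
      WITH NORECOVERY
  -- ...apply any intermediate log backups WITH NORECOVERY first, then:
  RESTORE LOG MyDatabase
      FROM DISK = 'E:\Backups\MyDatabase_tail.trn'
      WITH RECOVERY

Of course, that only works when the log file itself survives.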
So rule #1 for SQL Server DP: Don't lose the data.
Comments / constructive criticism welcome.
- Tim
"Griff" <Howling@.The.Moon> wrote in message
news:e7lkMBAkEHA.3148@.TK2MSFTNGP10.phx.gbl...
> Dear all
> Having read BOL, I was of the understanding that if a machine was lost
> (anything but disk failure) then it was very difficult to recover the
> data.
> The reason being that the data- and log-files were still "attached" to the
> dead SQLServer and needed to be detached from it before they could be used
> again; a difficult operation if the machine is dead.
> However, someone suggested that this was not the case. If a machine dies
> then it is a simple operation to physically disconnect the disks from the
> dead machine and connect them to a new machine and continue working. This
> assumes the Standard Edition of SQLServer (i.e. no clustering involved).
> I can test this, but it will take a few days to set the equipment up, so I
> wondered in the mean time whether anyone could confirm whether this was
> the
> case. If so, then presumably a SAN would present an even simplier
> solution,
> particularly if the disk set is a RAID5+1 configuration?
> Thanks in advance
> Griff
>|||Hi Tim
I agree with you completely. We use a server with RAID5+1 disks, and
implement log shipping onto a stand-by server. However, our consultant
pointed out that this provides us with a way of getting the service up
really quickly, but with loss of data (back to the last log that was
shipped). He suggested that the way to lose no data (provided that the
disks were not damaged) was simply to disconnect the SCSI cables from the
downed server and connect them to the standby server; that way no data would
be lost (the service might take longer to resume, but in our business downtime
is perceived as better than loss of data). I just wished to question whether
this really is technically possible/reliable.
Griff|||Griff,
See my earlier reply. I suggest you ask the consultant where his strategy is documented. That should
end the discussion.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Griff" <Howling@.The.Moon> wrote in message news:OT$GeZBkEHA.1644@.tk2msftngp13.phx.gbl...
> Hi Tim
> I agree with you completely. We use a server with RAID5+1 disks, and
> implement log shipping onto a stand-by server. However, our consultant
> pointed out that this provides us with a way of getting the service up
> really quickly, but with loss of data (back to the last log that was
> shipped). He suggested that the way to lose no data (providing that the
> disks were not damaged) was to simply to disconnect the scsi cable to the
> down server and connect them to the standby server and that way no data was
> lost (service might take longer to resume, but down time in our business is
> perceived as better than loss of data). I just wished to question whether
> this really was technically possible/reliable.
> Griff
>
Wednesday, March 28, 2012
Loops to create a page!
Hi,
Right, I have this problem, and it's more through a lack of understanding of VB.NET than an actual problem. I will outline what I want to do:
basically it all revolves around a page that needs to be built when navigated to, so that it can be easily updated without anyone having to edit the code.
Get all the table names from a database
Loop through each of the results to build a statement
Nest a 2nd loop to split the returned data from the correct table
Build a listbox for each table returned
This is what I have currently. It works, but the problem is that if another course is added, someone will need to manually edit the code on the page to add a new
block of code to get the new course. That is why I want to create a loop that gets all the data, so that all someone needs to do is put the new course name into the "All" table. Please note
I have cut this down to just show 2 results, but there are about 30-odd.
DBConn = New SqlConnection("Server=SD02;Initial Catalog=WhoGetsWhat;Persist Security Info=True;UID=x;Password=xx")
'Dim DBDataAdapter As SqlDataAdapter
DBDataAdapter = New SqlDataAdapter("Select AOE, ADC FROM TBL_Role WHERE Title = @ddlTitle", DBConn)
DBDataAdapter.SelectCommand.Parameters.Add("@ddlTitle", SqlDbType.NVarChar)
DBDataAdapter.SelectCommand.Parameters("@ddlTitle").Value = TitleDropDown.SelectedValue
DBDataAdapter.Fill(DSPageData, "Courses")

'Loop through each record, split at + and place in ListBoxes
VarDS = DSPageData.Tables("Courses").Rows(0).Item("AOE")
Dim VarArray As String() = VarDS.Split("+")
Dim i As Integer
For i = 0 To VarArray.Length - 1
    Dim li As New ListItem()
    li.Text = VarArray(i).ToString()
    li.Value = VarArray(i).ToString()
    Me.txtAOE.Items.Add(li)
Next i

VarDS = DSPageData.Tables("Courses").Rows(0).Item("ADC")
VarArray = Nothing
VarArray = VarDS.Split("+")
i = Nothing
For i = 0 To VarArray.Length - 1
    Dim li As New ListItem()
    li.Text = VarArray(i).ToString()
    li.Value = VarArray(i).ToString()
    Me.txtADC.Items.Add(li)
Next i

Now here is my pseudocode for what I roughly want to do; hope it makes sense to someone and someone can point me in the correct direction. Please note,
I know the split bit works, so at the minute I am just trying to get the loop to get all my courses.
DBConn = New SqlConnection("Server=SD02;Initial Catalog=WhoGetsWhat;Persist Security Info=True;UID=wgw;PWD=wgwsql;")
DBSelect.Connection = DBConn
DBSelect.Connection.Open()
'Get the row number in the database
DBSelect.CommandText = "SELECT COUNT(*) FROM TBL_All"
DBResult = DBSelect.ExecuteScalar()
DBSelect.Connection.Close()
Dim Count = DBResult
'Get all the Tables and Keys in the Databases
DBDataAdapter = New SqlDataAdapter("SELECT * FROM TBL_All", DBConn)
DBDataAdapter.Fill(DSPageData, "All")
'declare all loop vars
Dim X As Integer
Dim Y As Integer
Dim i As Integer
'Loops through all the tables
Dim DSArray As String() = DSPageData.Tables("All").Items()
For Y = 0 To Count
    Dim VarDS As String() = DSPageData.Tables("All").Rows(0).Item(DSArray(Y))
    Dim SplitArray As String() = VarDS.Split("+")

    For i = 0 To SplitArray.Length - 1
        Dim Li As New ListItem()
        Li.Text = SplitArray(i).ToString()
        Li.Value = SplitArray(i).ToString()
        Me.txt & DSArray(Y) & .Items.Add(Li)
    Next i

Next Y
I have just realised that in each loop, for each course, I will need to add the select statement, so I think I need to add
DBDataAdapter = New SqlDataAdapter("SELECT " & Dr & " FROM TBL_" & DSArray(Y), DBConn)
DBDataAdapter.Fill(DSPageData, "Courses")
into it.
OK, I think I am making some headway, and I think I have logically what I want:
For every row of column TName in TBL_ALL, assign it to a var Dim TName
For every row of column PKey in TBL_ALL, assign it to a var Dim PKey
For every row, create the statement "SELECT " & PKey & " FROM " & TName
Once I have made the loop that creates all of the select statements, I can run a loop to execute them,
and then a loop to assign the values to listboxes, but one step at a time (a sketch of the kind of statements I mean follows below).
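As an aside, the statement-generation step can be sketched in T-SQL itself. This is only an illustration of the idea; TBL_All, TName and PKey are taken from the post above, and the columns are assumed to be character strings:

  -- Build one SELECT statement per row of the lookup table;
  -- each generated string can then be executed by the page code.
  SELECT 'SELECT ' + PKey + ' FROM ' + TName AS GeneratedSelect
  FROM TBL_All

Running that query returns one ready-to-execute statement per course table, which is the list the outer loop needs.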
Here is what I have so far. It doesn't work and I know it's not right, but maybe someone can see where I am coming from and
point me in a better direction. Please also note, the Response.Write bits are just there to check that the code I have working is working,
but they are not needed.
Thanks
Chris
DBConn = New SqlConnection("Server=SD02;Initial Catalog=WhoGetsWhat;Persist Security Info=True;UID=x;PWD=xx;")
DBSelect.Connection = DBConn
DBSelect.Connection.Open()
'Get the row number in the database
DBSelect.CommandText = "SELECT COUNT(*) FROM TBL_All"
DBResult = DBSelect.ExecuteScalar()
DBSelect.Connection.Close()
Dim Count = DBResult
'Get all the Tables and Keys in the Databases
DBDataAdapter = New SqlDataAdapter("SELECT TName FROM TBL_All", DBConn)
DBDataAdapter.Fill(DSPageData, "TName")
DBDataAdapter = New SqlDataAdapter("SELECT PKey FROM TBL_All", DBConn)
DBDataAdapter.Fill(DSPageData, "PKey")
'declare all loop vars
Dim Dr As DataRow
Dim Dc As DataColumn
'Loops through all the tables

'For Each Dr In DSPageData.Tables("All").Rows
'    For Each Dc In DSPageData.Tables("All").Columns
'        Dim Column As DataColumnCollection = DSPageData.Tables("All").Columns
'        Dim Table As String = (Dr(Dc.ColumnName.ToString))
'        Response.Write(Table)
'        DBConn = New SqlConnection("Server=SD02;Initial Catalog=WhoGetsWhat;UID=wgw;PWD=wgwsql")
'        DBSelect.Connection = DBConn
'        DBSelect.Connection.Open()
'        DBSelect.CommandText = "SELECT '" & Dr(Dc.ColumnName.ToString) & "' FROM ('" & Table & "')"
'        DBSelect.ExecuteNonQuery()
'    Next Dc
'Next Dr

Dim Search() As SqlDataAdapter
Dim PKey As String
Dim TName As String
Dim Result As String()
For Each Dr In DSPageData.Tables("TName").Rows
    For Each Dc In DSPageData.Tables("TName").Columns
        'For i = 0 To Count
        PKey = (Dr(Dc.ColumnName.ToString))
        Response.Write(PKey)
        TName = (Dr(Dc.ColumnName.ToString))
        Response.Write(TName)
        'DBSelect.CommandText = "SELECT '" & PKey & "' FROM '" & TName & "'"
        'DBResult = DBSelect.ExecuteNonQuery
        'Result(i) = DBResult
        'Next
    Next
Next
For Each Dr In DSPageData.Tables("PKey").Rows
    For Each Dc In DSPageData.Tables("PKey").Columns
        'For i = 0 To Count
        PKey = (Dr(Dc.ColumnName.ToString))
        Response.Write(PKey)
        TName = (Dr(Dc.ColumnName.ToString))
        Response.Write(TName)
        'DBSelect.CommandText = "SELECT '" & PKey & "' FROM '" & TName & "'"
        'DBResult = DBSelect.ExecuteNonQuery
        'Result(i) = DBResult
        'Next
    Next
Next
End Sub|||
Maybe I am totally confusing everyone, lol? I am so lost :(
Monday, March 19, 2012
lookup task running very slow when dealing w/ big tables
I am trying to convert a procedure that joins 8 tables with INNER and OUTER JOINs. My understanding is that the Lookup task is the one to use and that I should break these joins into smaller blocks, but it takes a long time to load when I do this, since each of these tables has 10-40 million rows and I have 8 tables to go through. Currently the stored procedure takes 3-4 min to run; after converting it to 8 Lookup tasks, it ran for 20 min. Has anyone run into this issue before and know a workaround for it?
Thanks
Are you aware of the different caching types of the Lookup Transformation? By default the lookup caches the whole data set from your lookup table/query; if you are not already using a query to limit the number of columns/rows to only the ones the lookup needs, so that the cached data is limited to what you really need, that would be a good place to start. Try, if possible, to re-arrange the tasks in your data flow so that your lookup transforms are based on the tables with fewer rows.
You can see more information about the Lookup Transformation here: http://msdn2.microsoft.com/en-us/library/ms141821.aspx
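To make the point about trimming the reference query concrete: rather than pointing the Lookup at a whole table, it can be given a query that returns only the join key and the columns the data flow actually uses, optionally filtered to the rows that can match. The table and column names below are made up for illustration:

  -- Reference query for the Lookup: only the join key and the needed columns,
  -- filtered to the rows that can actually match
  SELECT CustomerKey, CustomerName
  FROM dbo.DimCustomer
  WHERE IsCurrent = 1

With tables of 10-40 million rows, cutting the cached set down this way is usually the first thing to try before changing the overall design.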
Lookup transforms may not be the best approach for all cases; if the performance you are getting by having your dataset come out of a single query/SP is better than the lookup tasks approach, why would you want to change it?
Thanks
Rafael Salas
|||Yes, I am setting a 500 MB cache size and selecting only 3 integer and 2 varchar(50) fields. I also tried to re-arrange the lookup tables, but since all of them are big, there is a limit to what re-arranging can do. The reason I am doing this is to evaluate whether it pays to convert our current procedure or not; the main thing I am trying to test is performance, and the conversion only makes sense if the SSIS tasks can outperform the stored procedure during our data pull. I wish that SSIS allowed input parameter mapping on the reference table tab; that would let me limit the number of lookup rows that get loaded into SSIS. I also found that the OLE DB Source task is very picky about what kind of SQL script it can run: if you have a subquery that contains a parameter, it won't take it, even though it is perfectly OK to run in T-SQL. This is a shame, since sometimes you get better performance by putting the parameters into a subquery to limit the number of rows before you join to another huge table.
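To make the subquery idea concrete, this is the general shape of the query the post describes, with entirely made-up table, column and parameter names; it runs fine as T-SQL, but the poster reports that the OLE DB Source will not accept the parameter inside the derived table:

  -- Pre-filter the large table with a parameter, then join the reduced set
  SELECT f.OrderKey, f.Amount, d.CustomerName
  FROM (
      SELECT OrderKey, CustomerKey, Amount
      FROM dbo.FactOrders
      WHERE LoadDate >= @StartDate    -- parameter inside the subquery
  ) AS f
  JOIN dbo.DimCustomer AS d
      ON d.CustomerKey = f.CustomerKey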
Anyway, I think I found a workaround: I have currently switched over to using the Merge Join task, and it seems to work better. But it still has not outperformed the current stored procedure. I just hope that someone out there has had a similar issue, so I can compare notes.
Thanks
|||
By using the Merge Join transformation, I think you may actually be walking away from any performance gain. By definition Merge Join is an asynchronous transformation, so it uses more resources (memory) than the Lookup transformation and is more likely to have poorer performance; it also requires its inputs to be sorted. I think there are good reasons for not having a parameter mapping ability in the reference table tab of the lookup: I can just see that it would require executing a query for every row coming into the lookup input. Actually, if you want to test it, you can go to the advanced tab and edit the query to include and map your parameters; the performance will hit the floor.
BTW, 500 MB of cache size may not be enough for the volume of data in your lookups.
You can review an interesting paper about SSIS performance at:
http://www.microsoft.com/technet/prodtechnol/sql/2005/ssisperf.mspx
Sorry if my posts were not helpful at all.
Rafael Salas
|||
Thanks for your responses, they do help. I am just a bit disappointed, since all the seminars I attended seemed to advise people to convert their SQL scripts into SSIS tasks for better maintenance and performance, but I have yet to find an approach that can outperform the current stored procedure. Since our source schema is fairly complex and requires a lot of joins in order to pull the data out, I guess you were right that SSIS tasks may not be the best choice for every situation in terms of performance. Also, you are right about the Merge Join task: it actually performs 2-3 times worse than the current stored procedure, so I guess I will stay with my stored procedures for now.
Thanks a lot for your help