Showing posts with label design. Show all posts

Friday, March 9, 2012

Looking for Wayne Snyder

Wayne,
I'm taking the LearnKey course SQL 2000 Implementing DB Design. In session 8 you
use a file called "bulk insert package.dts". I have looked for the script on
all of the LearnKey downloads and the CD and cannot find it. Is there any
place I can get it?
|||Actually, I was able to recreate the bulk insert package, but the rest of the
*.dts files are missing; it would be really helpful to have them to go along
with the course.
|||Hi,
I got his contact details; visit this site:
http://www.solidqualitylearning.com/Resumes/Wayne/AboutWayne.htm
Jose de Jesus Jr. MCP, MCDBA
Data Architect
Sykes Asia (Manila, Philippines)
MCP #2324787


Looking for tool

I'm looking for an MSSQL diagramming tool to replace the one in SQL2K.
A straight schema import is okay, or if I can do round-trip design or
totally replace Enterprise Manager and get even more added value, that's great
too. A DB-specific tool would be better than a generic diagrammer with import, but
I'm open to that if I find one that suits. (I have Visio, but not the
Enterprise edition - I'll check it out.) The main diagramming feature I want is nicer
and more informative graphics, especially the ability to have the relationship
lines point to/from the exact field positions. Free is good, and I don't mind
paying a few hundred bucks -- I just don't want to pay thousands. Any
recommendations?
thanks, m
Visio Enterprise might be a good option.
WinSQL has the ability to draw ER diagrams. http://www.synametrics.com/
--
Keith
"Mike" <vimakefile@.yahoo.com> wrote in message =
news:OQff0hdOEHA.1456@.TK2MSFTNGP09.phx.gbl...
> I'm looking for a MSSQL diagramming tool to replace the one in SQL2K.
> A straight schema-import is okay, or if I can do round-trip design or
> totally replace Enteprise Manager and have even more value add, that's =
great
> too. A DB-specific tool would be better than a diagrammer w/import - =
but
> I'm open to that if I find one that suites. (I have Viso, but not
> enteprise - I'll check it out.) The main diagramming feature I want is =
nicer
> and more informative gfx, especailly the ability to have the =
relationship
> lines point to/from the exact field positions. Free is good, but don't =
mind
> paying a few hundred bucks -- don't want to pay thousands. Any
> recommendations?
>=20
> thanks, m
>=20
>

Wednesday, March 7, 2012

Looking for table design pattern for different value types

Hi,

I need to store a list of parameters in a database. Each parameter has a name, description, comment and a value. Easy so far.

However the values are of different types. Each individual parameter has a value which may be of type int, decimal, string, boolean, custom type etc.

Which table design pattern is most appropriate?
We have a heated in-house discussion and I need supporting arguments.


Options explored so far:

1) (De-)serializing the value to a string-type.
2) Adding a column for each type, using only one column at a time.
3) Adding extra value-tables, one table for each type.
The disadvantages for each option are obvious and the basis for our discussion.
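As a concrete illustration of option 2 (not from the original discussion; the table and column names are invented), a minimal sketch: one nullable column per type, with a CHECK constraint guaranteeing that exactly one value column is populated per row.

create table dbo.parameterSketch
( parmName varchar (40) not null primary key,
parmDescription varchar (80),
intValue int null,
decimalValue decimal (18,4) null,
stringValue varchar (400) null,
boolValue bit null,
-- exactly one of the typed columns may be non-null
constraint CK_parameterSketch_oneValue check
( case when intValue is not null then 1 else 0 end
+ case when decimalValue is not null then 1 else 0 end
+ case when stringValue is not null then 1 else 0 end
+ case when boolValue is not null then 1 else 0 end = 1 )
)

The constraint keeps the single-table shape honest, but queries still have to know which column to read, which is one of the obvious disadvantages referred to above.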

Your help in this matter will be appreciated.
Regards, Tonn

Tonn:

Can you take advantage of the SQL_VARIANT data type? Something like:

-- drop guarded so the script runs cleanly the first time
if object_id ('dbo.parameter') is not null
drop table dbo.parameter
go

create table dbo.parameter
( parmName varchar (40),
parmDescription varchar (80),
parmType tinyint,
parmValue sql_variant
)
go

insert into parameter values ('Integer Parm', 'Just an integer parameter', 1, 1)
insert into parameter values ('Numeric (9,2) parm', 'Yeah', 2, cast (17.50 as numeric (9,2)))
insert into parameter values ('Varchar parm', 'A varchar parm', 3, 'Yes, a varchar')
select * from parameter

-- Sample Output:

-- Warning: The table 'parameter' has been created but its maximum row size (8164) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.

-- parmName            parmDescription            parmType  parmValue
-- ------------------  -------------------------  --------  --------------
-- Integer Parm        Just an integer parameter  1         1
-- Numeric (9,2) parm  Yeah                       2         17.50
-- Varchar parm        A varchar parm             3         Yes, a varchar
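As an aside not in the original reply: a sql_variant value carries its base type with it, so the stored parmType can be cross-checked against what SQL Server recorded (this assumes the dbo.parameter table above).

-- report the underlying type of each stored value
select parmName,
cast (sql_variant_property (parmValue, 'BaseType') as varchar (30)) as baseType
from dbo.parameter

-- baseType comes back as 'int', 'numeric', 'varchar', and so on.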

|||

Hi Waldrop,

Thanks for your reply.
It most certainly is an interesting suggestion. It's like option no. 1, except that the serialization has been moved into the database.

Is this the preferred way of handling this type of problem? We're in the financial market, and this is a common problem for us, but there seems to be no documentation, best practices, etc. available, so our individual programmers tend to insist on their personal preferences.

|||

I think your use case is the type of application that the SQL_VARIANT datatype is intended to address. It would be helpful if you could provide some sample update or select statements where you might use the proposed column. I don't think I would want to have a column for each different potential datatype. Below is an example of how you might load the data into variables. Note the need to CAST the data:


declare @numericVar numeric (9,2)
declare @varcharVar varchar (80)
declare @intVar integer

set @numericVar = (select cast (parmValue as numeric (9,2)) from parameter where parmName = 'Numeric (9,2) parm')
set @varcharVar = (select cast (parmValue as varchar (80)) from parameter where parmName = 'Varchar parm')
set @intVar = (select cast (parmValue as integer) from parameter where parmName = 'Integer Parm')

select @numericVar as [@numericVar],
@varcharVar as [@varcharVar],
@intVar as [@intVar]

-- Sample Output:

-- @numericVar  @varcharVar     @intVar
-- -----------  --------------  -------
-- 17.50        Yes, a varchar  1


Dave

|||

Hi Dave,
I think you've already made your point and put your suggestion at the top of the list.

It's no use sending an example, since every alternative proposed so far works. I'm actually looking for some form of authority to stop us spending all those man-hours debating it.
In the past I've had success with programming issues by referencing Gamma's "Design Patterns". I was hoping to do the same for database issues, but I've found out that it's not going to be that easy.

Looking for some wisdom

My company has a SQL Server database of about 100 GB. There are many
performance issues that I believe are linked to database design and
programming. I have a couple of questions that I hope can be
answered.
The database only has 26 GB of real data; the rest is indexes.
Is this normal? I know the extra indexes cause performance problems
with inserts, updates and deletes.
The database has huge stored procedures, many pages long. Is it the
right thing to do, putting all the work onto the SQL Server itself?
Shouldn't these long procedures be handled in the middle tier using VB
or C?
Triggers using the inserted and deleted tables: these triggers are used
on transactions for inserts, updates and deletes on the database. From
what I have seen monitoring the server, these triggers run twice as
long as the update, delete or insert, and since the trigger is fired
during a transaction I would guess that the transaction is not
committed until the trigger is done. Would I be correct in assuming
this?
That's all I have for right now; any help would be great. If you had
any documentation to back this up, it would help a lot. I seem to be in a
battle with the programming group on this whole performance issue. By
the way, the server hardware is dual 2 GHz Xeons, 4 GB memory, 165 GB
of disk space on RAID 5.
Jim
jmaddox@oaktreesys.com
|||I have frequently seen databases where there was as much
space used for indexes as for data.
I think the highest index-to-data size ratio I saw was
~2x, and I felt that one had unnecessary indexes. Good
table design is also part of index minimization.
BTW, each index adds between 15-40% overhead to the base
cost of modifying a row, depending on a few factors (SQL
Server Connections conference, Oct 2003, SDB417).
I like to maintain a script that executes each sp once
(plus consideration for multiple code paths);
then I can drop indexes one by one and look for table scans.
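A rough sketch of that technique (the procedure name and parameter are placeholders, not from the post): with showplan enabled, each batch is compiled rather than executed, and scan operators show up in the returned plan text.

set showplan_text on
go
exec dbo.usp_SomeProc @someParm = 1  -- placeholder procedure and parameter
go
set showplan_text off
go
-- Search the returned plan text for 'Table Scan' or 'Clustered Index Scan',
-- then drop a candidate index and re-run to see whether new scans appear.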
A big problem with very long sp's is recompiles; an insert
into a temp table or some other factor can trigger a
recompile of the entire sp (fixed in Yukon), so if you can't
fix the cause of the recompile, breaking a big proc into
smaller procs can be helpful.
I prefer using sprocs and not triggers. Triggers are good
if you are using plain SQL statements, since you then need only one
network round trip to handle the complete transaction.
I believe triggers to be less efficient in multi-row
operations, where the trigger may fire once per row.
|||I don't think triggers fire once per row in SQL Server, since there are no
row-level triggers like in Oracle.
They are set-based only. But beware of triggers, since they make it harder to
follow the flow of what is happening.
If you are running big inserts, deletes or updates (multiple rows per
command) and use the inserted and deleted tables in joins in the trigger, I'm not
sure the performance will be incredible. You should probably try to see if
sp's with all the logic of what the triggers are doing could be created and
called instead of relying on the trigger processing.
Triggers are part of your transaction, so if these commands are long-lasting
and touch lots of data you can get into blocking problems, which obviously
doesn't help performance. In an sp you could control the transactions
explicitly and commit (or roll back) at more than one point.
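To make the set-based behavior concrete, a hedged illustration (the tables and columns are hypothetical): this AFTER UPDATE trigger fires once per statement, so the body must join the entire inserted and deleted sets rather than assume a single row.

create trigger trg_OrdersAudit on dbo.Orders
after update
as
begin
-- one insert covers every row the statement touched
insert into dbo.OrdersAudit (orderId, oldStatus, newStatus, changedAt)
select d.orderId, d.status, i.status, getdate()
from inserted i
join deleted d on d.orderId = i.orderId
where d.status <> i.status
end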
As for having sp's or a middle tier in VB or other:
I prefer having SQL code located on SQL Server. This way it's easy to
isolate and change SQL code that's not optimal. You can see that this is what MS
thinks also in Yukon, by having SQL Server host .NET so we can create more
complex procs.
The middle tier can perhaps generate the commands used to access the DB, but
it should put them in a proc and use that the next time around. This way the
middle tier can call one proc to return multiple datasets instead of
executing each command separately, incurring a round trip each time. Also,
your middle tier can perhaps cache some amount of data so as to not always hit
the DB.
As for your indexes, check this site out:
http://www.sql-server-performance.com/optimizing_indexes.asp
The above site holds a great deal of info; you should probably browse it, and
you will surely find a load of answers to your questions.
Chris.
"joe chang" <anonymous@.discussions.microsoft.com> wrote in message
news:0c7201c3be83$258e93c0$a001280a@.phx.gbl...
> i have frequently seen databases where there was as much
> space used for indexes as for data
> i think the highest index to data size ratio i saw was
> ~2X, and i felt that one had unnecessary indexes. good
> table design is also part of index minimization.
> btw, each index adds between 15-40% overhead to the base
> cost of modifying a row, depending on a few factors (SQL
> Server Connections conference, Oct 2003, SDB417)
> i like to maintain a script that executes each sp once
> (plus consideration for multiple code paths)
> then i can drop indexes one by one to look for table scans.
> a big problem with very long sp's is recompiles, an insert
> into a temp table or other factor could trigger a
> recompile of the entire sp (fixed in Yukon), so if can't
> fix the cause of the recompile, breaking a big proc into
> smaller procs can be helpful
> i prefer using sprocs and not triggers. triggers are good
> if you are using sql statements, so you need only one
> network roundtrip to handle the complete transaction.
> i believe triggers to be less efficient in multi-row
> operations, where the trigger may fire once per row,
> >--Original Message--
> >My company has a sql database with a 100 gig database.
> There are many
> >performance issues that I believe are linked to database
> design and
> >programming. I have a couple of questions that I hope
> can be
> >answered.
> >
> >The database only has 26 gig of real data the rest are
> indexes.
> >Is this normal? I know the extra indexes cause
> performance problems
> >with inserts,updates and deletes.
> >
> >The databse has huge stored procedures many pages long.
> Is it the
> >right thing to do putting all the work onto the sql
> server itself?
> >Shouldn't these long procedures be handled in the middle
> tier using vb
> >or c?
> >
> >Triggers using inserted and deleted tables. These
> triggers are used
> >on tansactions for inserts udates and deletes on the
> database. From
> >what I have seen monitoring the server these triggers run
> twice as
> >long as the update delete or insert and since the trigger
> is fired
> >during a transaction I would guess that the transaction
> is not
> >commited until the trigger is done. Would I be correct
> in assuming
> >this?
> >
> >Thats all I have for right now any help would be great.
> If you had
> >any documention to back this up would help alot. I seem
> to be in a
> >battle with the programming group on this whole
> performance issue. By
> >the way the server hardware is dual 2 gig xeons 4 gig
> memory 165 gig
> >hd space on raid 5.
> >
> >Jim
> >jmaddox@.oaktreesys.com
> >.
> >

Saturday, February 25, 2012

Looking for physical design suggestions...

I've got a server that has a RAID-5 array partitioned into C: and D:
drives (OS Win2K Adv. Server installed on C:). The server also has a
mapping to a NAS device using the latest protocols, which trick the
system into thinking the mapping is actually a local SCSI drive. That's
drive X:.
This server is used only for SQL, and contains an OLTP database that
sees a lot of use and is pretty heavily indexed.
I am toying with the idea of centralizing my data storage on the NAS
(the data center network segment is 1-gigabit Ethernet). So I was
thinking about putting my primary data file on the NAS (drive X:) and
keeping all tables there, creating a secondary data file on the local
RAID-5 (drive D:) and putting all non-clustered indexes there, as well
as keeping tempdb there and specifying the sort-in-tempdb option.
Log files would also remain on D:.
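As a sketch of what that layout could look like in T-SQL (the database name, table, index and paths are placeholders, and note the warning about NAS support in the reply below):

create database SalesDB
on primary ( name = 'SalesDB_data', filename = 'X:\SQLData\SalesDB_data.mdf' ),
filegroup IXFG ( name = 'SalesDB_ix', filename = 'D:\SQLData\SalesDB_ix.ndf' )
log on ( name = 'SalesDB_log', filename = 'D:\SQLLog\SalesDB_log.ldf' )
go
use SalesDB
go
create table dbo.Orders ( orderId int primary key, customerId int )
go
-- the non-clustered index lands on the local RAID partition via the filegroup
create nonclustered index IX_Orders_CustomerId on dbo.Orders (customerId) on IXFG
go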

If anyone can suggest a better scenario given the above setup - I'd
love to hear it. Much appreciated.

Alexey Aksyonenko
|||Although it is technically possible to put SQL Server database files on a NAS
drive, this is not supported by Microsoft, and they strongly recommend that
you don't do it. With a NAS drive you lose many of the benefits of a
client-server database: you will get poor performance and you risk
corrupting your data.

Databases belong on direct-attached or SAN storage.

--
David Portas
Please reply only to the newsgroup

"Alexey Aksyonenko" <Alexey.Aksyonenko@.coanetwork.com> wrote in message
news:1449e414.0309260612.7558f05f@.posting.google.c om...
> I got a server that has a RAID-5 array partitioned into C: and D:
> drives (OS Win2K Adv. Server installed on C:). The server also has a
> mapping to a NAS device using the latest protocols that trick the
> system into thinking the map is actually a local SCSII drive. That's
> drive X:.
> This server is used only for SQL, and contains an OLTP database that
> sees a lot of use and is pretty heavily indexed.
> I am toying with the idea of centralizing my data storage on the NAS
> (data center network segment is 1-gigabit ethernet). So I was
> thinking about putting my primary data file on the NAS (drive X:) and
> keeping all tables there, creating a secondary data file on local
> RAID-5 (drive D:) and putting all non-clustered indexes there, as well
> as keeping the tempdb there and specifying the sort in tempdb option.
> Log files would also remain on D:.
> If anyone can suggest a better scenario given the above setup - I'd
> love to hear it. Much appreciated.
> Alexey Aksyonenko|||Thanks David. Actually, they came out with this new protocol that as I
said makes the system think that the mapping is a local drive. But I
will definitely keep this in mind. Any suggestions on the actual
physical design portion, NAS aside?

|||Ignoring the NAS, the answer is: it doesn't really matter. Since your C: and D:
drives are logical partitions of a single physical RAID-5 array, it won't
make much of a difference how you lay out the files.