Friday, March 9, 2012
Looking to make a database of fingerprints
Are you sure you know how much work this is? I haven't worked with fingerprints, but you cannot assume that the fingerprints are identical, so you have to create some kind of pattern-matching algorithm, or get one. My best advice (if you are only going to use this solution internally) is to leave it and buy a complete solution that does this. I'd say that would be way cheaper (but not so interesting, of course).
|||For instance, these guys: http://www.east-shore.com/
|||If you are looking for the logic of fingerprint matching, it is complex and at the same time standard, and this is not the place where you will get it. However, if you are interested in saving fingerprints (or any image for that matter) in SQL Server, you can use an "image" type field.
|||I am not looking to recall things from the database using fingerprints, just looking to input them. The central office has that algorithm, so if they need to use it, all I have to do is give them access to the database.
|||For saving fingerprints, as I said, you can create a table with an image-type field. Access to SQL Server is normally available to all machines in the network that have the SQL client installed. Of course there are many finer aspects of database access which can be considered at a later stage.
|||Of course you can use an image field, but that does not solve the case of looking up the images later on. When you have to check a fingerprint, you'll then have to return ALL the fingerprints to the client, for the client to decide if it matches. Matching fingerprints may seem easy at first, but it's actually a rather complex task. So, I rest my case. Leave it to someone who's done it before, unless you have both time and money to spend.
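For the storage-only part of the question, here is a minimal sketch of the kind of table the second and fourth replies describe, using SQL Server 2000's image type; all table and column names are invented for illustration.

-- Hypothetical table for storing raw fingerprint scans.
CREATE TABLE dbo.FingerprintScan (
    ScanID     int IDENTITY(1,1) PRIMARY KEY,
    PersonID   int      NOT NULL,
    CapturedAt datetime NOT NULL DEFAULT (GETDATE()),
    ScanImage  image    NULL   -- raw bytes of the scanned print
);
-- The capture application writes the bytes into ScanImage; the central office
-- can later read the column back and run its own matching algorithm on it.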
Wednesday, March 7, 2012
Looking for SQL Server Online Training Course
I am trying to find a good online course for SQL Server 2000
administration. I am already taking an online course from a company my
employer has a contract with. But the course is so bad, I am not learning
anything :<
I would really appreciate it if you could give me some recommendations
about a good place to take this course.
Thank you in advance!
Eddy
|||You'd be better off studying some good books on SQL Server than trying to
find a course. Most courses are horribly inadequate. My advice: stay away
from books by MS Press at first. Stick to O'Reilly and Wrox Press.
Looking for some wisdom
My company has a SQL Server database of about 100 GB. There are many
performance issues that I believe are linked to database design and
programming. I have a couple of questions that I hope can be
answered.
The database only has 26 GB of real data; the rest is indexes.
Is this normal? I know the extra indexes cause performance problems
with inserts, updates, and deletes.
The database has huge stored procedures, many pages long. Is it the
right thing to do to put all the work onto the SQL server itself?
Shouldn't these long procedures be handled in the middle tier using VB
or C?
Triggers using the inserted and deleted tables: these triggers are used
on transactions for inserts, updates, and deletes on the database. From
what I have seen monitoring the server, these triggers run twice as
long as the update, delete, or insert, and since the trigger is fired
during a transaction, I would guess that the transaction is not
committed until the trigger is done. Would I be correct in assuming
this?
That's all I have for right now; any help would be great. If you had
any documentation to back this up, it would help a lot. I seem to be in a
battle with the programming group on this whole performance issue. By
the way, the server hardware is dual 2 GHz Xeons, 4 GB of memory, and
165 GB of disk space on RAID 5.
Jim
jmaddox@.oaktreesys.com
|||I have frequently seen databases where there was as much space used for
indexes as for data. I think the highest index-to-data size ratio I saw was
~2X, and I felt that one had unnecessary indexes. Good table design is also
part of index minimization.
BTW, each index adds between 15% and 40% overhead to the base cost of
modifying a row, depending on a few factors (SQL Server Connections
conference, Oct 2003, SDB417).
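A quick way to sanity-check the data-versus-index split on SQL Server 2000 is sp_spaceused; the table name below is a placeholder.

-- Database-wide totals (second result set shows reserved, data, index_size, unused).
EXEC sp_spaceused @updateusage = 'TRUE';

-- Per-table breakdown for a suspect table (name is illustrative).
EXEC sp_spaceused 'dbo.YourLargeTable';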
I like to maintain a script that executes each sp once (plus consideration
for multiple code paths); then I can drop indexes one by one and look for
table scans.
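One way to spot the table scans after dropping a candidate index: capture the plan text for a representative proc call and look for Table Scan operators. The proc name and parameter here are hypothetical.

-- Returns the plan text without executing the proc (SQL Server 2000).
SET SHOWPLAN_TEXT ON;
GO
EXEC dbo.usp_SomeRepresentativeProc @SomeParam = 1;
GO
SET SHOWPLAN_TEXT OFF;
GO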
A big problem with very long sp's is recompiles: an insert into a temp table
or some other factor can trigger a recompile of the entire sp (fixed in
Yukon), so if you can't fix the cause of the recompile, breaking a big proc
into smaller procs can be helpful.
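A sketch of that idea, assuming a temp-table insert is what forces the recompile; all object names are made up. Isolating the temp-table work in its own small proc means only that proc recompiles, not the big one.

-- Small proc that owns the temp-table work; a recompile here is cheap.
CREATE PROCEDURE dbo.usp_BigProc_TempWork
AS
BEGIN
    CREATE TABLE #work (id int NOT NULL, amount money NOT NULL);
    INSERT INTO #work (id, amount)
    SELECT id, amount FROM dbo.SourceTable WHERE amount > 0;
    -- ...further processing against #work...
    SELECT id, amount FROM #work;
END
GO

-- The big proc keeps its stable logic and cached plan, and just calls the small one.
CREATE PROCEDURE dbo.usp_BigProc
AS
BEGIN
    -- ...long, stable logic...
    EXEC dbo.usp_BigProc_TempWork;
    -- ...more stable logic...
END
GO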
I prefer using sprocs rather than triggers. Triggers are good if you are
sending plain SQL statements from the client, since you then need only one
network round trip to handle the complete transaction. I believe triggers
are less efficient in multi-row operations, where the trigger may fire once
per row.
|||I don't think triggers fire once per row in SQL Server, since there are no
row-level triggers like in Oracle.
They are set based only. But beware of triggers since they make it harder to
follow the flow of what is happening.
If you are running big inserts, deletes, or updates (multiple rows per
command) and the trigger joins against the inserted and deleted tables, I'm
not sure the performance will be great. You should probably see whether sp's
with all the logic of what the triggers are doing could be created and
called instead of relying on the trigger processing.
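For reference, a minimal set-based trigger of the kind being discussed: it fires once per statement, so the body joins the inserted and deleted pseudo-tables to handle many rows at once. Table and column names are invented.

CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
FOR UPDATE
AS
BEGIN
    -- One INSERT covers every row touched by the triggering UPDATE.
    INSERT INTO dbo.OrdersAudit (OrderID, OldStatus, NewStatus, ChangedAt)
    SELECT d.OrderID, d.Status, i.Status, GETDATE()
    FROM deleted d
    JOIN inserted i ON i.OrderID = d.OrderID;
END
GO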
Triggers are part of your transaction, so if these commands are long-lasting
and touch lots of data you can get into blocking problems, which obviously
doesn't help performance. In a sp you could control the transactions
explicitly and commit (or roll back) at more than one point.
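A sketch of that kind of staged commit inside a proc, in SQL Server 2000 style (using @@ERROR, since TRY/CATCH doesn't exist yet); object names are hypothetical, and it assumes no new qualifying rows arrive between the two transactions.

CREATE PROCEDURE dbo.usp_ArchiveOldOrders
    @CutoffDate datetime
AS
BEGIN
    -- First unit of work: copy rows to the archive, then commit to release locks.
    BEGIN TRAN;
    INSERT INTO dbo.OrdersArchive (OrderID, CustomerID, OrderDate)
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate < @CutoffDate;
    IF @@ERROR <> 0 BEGIN ROLLBACK TRAN; RETURN 1; END
    COMMIT TRAN;

    -- Second unit of work in its own short transaction.
    BEGIN TRAN;
    DELETE FROM dbo.Orders WHERE OrderDate < @CutoffDate;
    IF @@ERROR <> 0 BEGIN ROLLBACK TRAN; RETURN 1; END
    COMMIT TRAN;
END
GO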
As for having sp's versus a middle tier in VB or another language: I prefer
having SQL code located on SQL Server. This way it's easy to isolate and
change SQL code that's not optimal. You can see that MS thinks so too: in
Yukon, SQL Server will host .NET so we can create more complex procs.
The middle tier can perhaps generate the commands used to access the DB, but
it should then put them in a proc and use that the next time around. This way
the middle tier can call one proc to return multiple datasets instead of
executing each command separately, incurring a round trip each time. Also,
your middle tier can perhaps cache some amount of data so as not to always
hit the DB.
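A sketch of the one-round-trip idea: a single proc returning several result sets that the middle tier reads in order (for example with ADO's NextRecordset). All names here are made up.

CREATE PROCEDURE dbo.usp_GetCustomerScreen
    @CustomerID int
AS
BEGIN
    -- Three result sets, one network round trip.
    SELECT CustomerID, Name, Region  FROM dbo.Customers     WHERE CustomerID = @CustomerID;
    SELECT OrderID, OrderDate, Total FROM dbo.Orders        WHERE CustomerID = @CustomerID;
    SELECT NoteID, NoteText          FROM dbo.CustomerNotes WHERE CustomerID = @CustomerID;
END
GO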
As for your indexes, check this site out:
http://www.sql-server-performance.com/optimizing_indexes.asp
The site holds a great deal of info; browse it and you will surely find
answers to many of your questions.
Chris.
"joe chang" <anonymous@.discussions.microsoft.com> wrote in message
news:0c7201c3be83$258e93c0$a001280a@.phx.gbl...
> i have frequently seen databases where there was as much
> space used for indexes as for data
> i think the highest index to data size ratio i saw was
> ~2X, and i felt that one had unnecessary indexes. good
> table design is also part of index minimization.
> btw, each index adds between 15-40% overhead to the base
> cost of modifying a row, depending on a few factors (SQL
> Server Connections conference, Oct 2003, SDB417)
> i like to maintain a script that executes each sp once
> (plus consideration for multiple code paths)
> then i can drop indexes one by one to look for table scans.
> a big problem with very long sp's is recompiles, an insert
> into a temp table or other factor could trigger a
> recompile of the entire sp (fixed in Yukon), so if can't
> fix the cause of the recompile, breaking a big proc into
> smaller procs can be helpful
> i prefer using sprocs and not triggers. triggers are good
> if you are using sql statements, so you need only one
> network roundtrip to handle the complete transaction.
> i believe triggers to be less efficient in multi-row
> operations, where the trigger may fire once per row,
> >--Original Message--
> >My company has a sql database with a 100 gig database.
> There are many
> >performance issues that I believe are linked to database
> design and
> >programming. I have a couple of questions that I hope
> can be
> >answered.
> >
> >The database only has 26 gig of real data the rest are
> indexes.
> >Is this normal? I know the extra indexes cause
> performance problems
> >with inserts,updates and deletes.
> >
> >The databse has huge stored procedures many pages long.
> Is it the
> >right thing to do putting all the work onto the sql
> server itself?
> >Shouldn't these long procedures be handled in the middle
> tier using vb
> >or c?
> >
> >Triggers using inserted and deleted tables. These
> triggers are used
> >on tansactions for inserts udates and deletes on the
> database. From
> >what I have seen monitoring the server these triggers run
> twice as
> >long as the update delete or insert and since the trigger
> is fired
> >during a transaction I would guess that the transaction
> is not
> >commited until the trigger is done. Would I be correct
> in assuming
> >this?
> >
> >Thats all I have for right now any help would be great.
> If you had
> >any documention to back this up would help alot. I seem
> to be in a
> >battle with the programming group on this whole
> performance issue. By
> >the way the server hardware is dual 2 gig xeons 4 gig
> memory 165 gig
> >hd space on raid 5.
> >
> >Jim
> >jmaddox@.oaktreesys.com
> >.
> >
Saturday, February 25, 2012
Looking for options to fix a poorly implemented solution
So, I've started a new job recently where I am doing work on a SQL
Server database designed by a third party software company, which also
wrote the client software accompanying the database. The client
software is crap, the data model sucks rocks, and I'm stuck with it.
So in no way can I modify client code, or redesign any element of
relational model. I've been able to modify stored procedures and
triggers for performance, perform regular maintenance tasks, and
upgrade all upgradable elements, but that's about it. On top of that,
people developed an internal method of using the database that is
causing it to grow way too fast. Basically, there is one table with a
BLOB field that people are using to store Word documents containing
scanned images (part of their efforts to go to a paperless system).
Documents are scanned only when there is no other way to get the
information into the system (client signatures, legal documents from
courts, etc). And I've been charged with getting the size of the
database down. Yay.
Brainstorming for solutions, I was wondering if there is any possible
way to compress BLOB fields in a way that is completely transparent to
the client. I can't think of a way, so I have little hope for that
idea. Another is to go in and make sure all of the images are
compressed. That will only yield marginal results. Finally, going in
and replacing all of the scanned documents with documents that point
to a UNC path with all of the extracted documents is a solution that
will require quite a bit of work, but is possible. (If it were up to
me, I'd just say to heck with it and keep some documents in a paper
system.) Any thoughts about what I could do?
Thanks!
|||When I was at Sprint, the Place Where Consultants Go To Be Punished, our
department got into a document-archiving frenzy. We ended up converting all
documents to PDFs, storing them on the server, with only a link to each
document in the database.
- Wm
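A minimal sketch of the approach Wm describes, where the table keeps only a UNC path rather than the document bytes; the table, columns, and share path are all hypothetical.

CREATE TABLE dbo.ScannedDocument (
    DocumentID   int IDENTITY(1,1) PRIMARY KEY,
    ClientID     int           NOT NULL,
    ScannedAt    datetime      NOT NULL DEFAULT (GETDATE()),
    DocumentPath nvarchar(260) NOT NULL  -- UNC path to the PDF on the file share
);

-- The database stores only the pointer; the document itself lives on the share.
INSERT INTO dbo.ScannedDocument (ClientID, DocumentPath)
VALUES (42, N'\\fileserver\scans\client42\court-order.pdf');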
"AAAWalrus" <aaawalrus@.yahoo.com> wrote in message
news:8b266bc2.0312101204.32a99d74@.posting.google.com...
> So, I've started a new job recently where I am doing work on a SQL
> Server database designed by a third party software company, which also
> wrote the client software accompanying the database. The client
> software is crap, the data model sucks rocks, and I'm stuck with it.
> So in no way can I modify client code, or redesign any element of
> relational model. I've been able to modify stored procedures and
> triggers for performance, perform regular maintenance tasks, and
> upgrade all upgradable elements, but that's about it. On top of that,
> people developed an internal method of using the datbase that is
> causing it to grow way too fast. Basically, there is one table with a
> BLOB field that people are using to store Word documents containing
> scanned images (part of their efforts to go to a paperless system).
> Documents are scanned only when there is no other way to get the
> information into the system (client signatures, legal documents from
> courts, etc). And I've been charged with getting the size of the
> database down. Yay.
> Brainstorming for solutions, I was wondering if there is any possible
> way to compress BLOB fields in a way that is completely transparent to
> the client. I can't think of a way, so I have little hope for that
> idea. Another is to go in and make sure all of the images are
> compressed. That will only yield marginal results. Finally, going in
> and replacing all of the scanned documents with documents that point
> to a UNC path with all of the extracted documents is a solution that
> will required quite a bit of work, but is possible. (If it were up to
> me, I'd just say to heck with it and keep some documents in a paper
> system). Any thought about what I could do?
> Thanks!