Monday, March 26, 2012
Looping through sysDatabases to perform maintenance
I need to generate some metrics on the size of transaction logs on all
databases for a given server. I was thinking I could loop through
sysDatabases to get the name of every database. However, when I try to do
something like:
...
While @@FETCH_STATUS = 0 Begin
Use @cDBName
--Do some stuff here...
Fetch next From curDBList Into @cDBName
End
the server throws an "Incorrect syntax near '@cDBName'." error. Is there a way
I can dynamically specify the name of the DB with the USE command?
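For reference, USE will not accept a variable on SQL 2000; the usual workaround is dynamic SQL, where the database name is concatenated into the batch and the work runs inside that same batch. A rough sketch (the cursor and variable names follow the snippet above; the SELECT is just a stand-in for the real work):

```sql
DECLARE @cDBName sysname, @sql nvarchar(4000)

DECLARE curDBList CURSOR FOR
    SELECT name FROM master.dbo.sysdatabases

OPEN curDBList
FETCH NEXT FROM curDBList INTO @cDBName

WHILE @@FETCH_STATUS = 0
BEGIN
    -- The USE has to live in the same dynamic batch as the work itself;
    -- it only changes context for the duration of the EXEC
    SET @sql = N'USE ' + QUOTENAME(@cDBName) + N'; SELECT DB_NAME()'
    EXEC (@sql)

    FETCH NEXT FROM curDBList INTO @cDBName
END

CLOSE curDBList
DEALLOCATE curDBList
```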
thanks in advance.

mystical potato (mysticalpotato@discussions.microsoft.com) writes:
> I need to generate some metrics on the size of transaction logs on all
> databases for a given server.
> [rest of question snipped]
The easiest is probably to use sp_MSforeachdb. Here is a fairly
stupid example:
sp_MSforeachdb 'SELECT ''?'', COUNT(*) FROM sysobjects'
Note that sp_MSforeachdb is not a documented procedure, and thus not
supported. Nevertheless, it's fairly popular.
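Applied to the original question, something along these lines would report the log file sizes per database (a sketch; in SQL 2000 the size column of sysfiles is in 8 KB pages and status bit 0x40 marks log files, though like sp_MSforeachdb itself this is undocumented territory):

```sql
EXEC sp_MSforeachdb '
    SELECT ''?'' AS db_name,
           name AS log_file,
           size * 8 / 1024 AS size_mb    -- size is in 8 KB pages
    FROM [?].dbo.sysfiles
    WHERE status & 0x40 = 0x40'          -- 0x40 flags a log file
```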
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

|||Hi
A different approach may be to use the Perfmon counters to track this,
especially if you wish to pinpoint a sudden increase. A step further along
the system monitoring approach would be to look into using MOM to gather and
report this for you.
John
"mystical potato" <mysticalpotato@discussions.microsoft.com> wrote in
message news:D5DE41B3-6065-422E-8C02-DFFE9167C213@microsoft.com...
> from sql 2000:
> [original question snipped]
> Is there a way I can dynamically specify the name of the DB
> with the USE command?
> thanks in advance.

|||Try using:
dbcc sqlperf(logspace)
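If you want that output in a table rather than on screen, the classic pattern is INSERT ... EXEC (the temp table layout below matches the columns LOGSPACE returns on SQL 2000):

```sql
CREATE TABLE #LogSpace (
    DatabaseName     sysname,
    LogSizeMB        float,
    LogSpaceUsedPct  float,
    Status           int
)

-- DBCC output has to be wrapped in EXEC to feed INSERT ... EXEC
INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF(LOGSPACE)')

SELECT * FROM #LogSpace ORDER BY LogSizeMB DESC

DROP TABLE #LogSpace
```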
AMB
"mystical potato" wrote:
> from sql 2000:
> I need to generate some metrics on the size of transaction logs on all
> databases for a given server.
> [rest of question snipped]
Friday, March 23, 2012
Looping In SQL 2000 (Can it be a nested loop)
I have to automate a process that assigns sales leads to sales people.
For example:
Every day we buy a list of sales leads, it ranges in size from 50 -
100 records.
We have a team of sales people that also can range from 5 - 8 people.
I need to take the new records and divide them evenly among the sales
people.
If I get 50 records, and have 5 sales people, then each sales person
gets 10 leads.
--
So, I'm guessing that I may need to have a nested loop inside this. I
have tried it several different ways, but can't seem to get it quite
right.
DECLARE @TotalRecordCount int, @TotalSalesPeopleCount int,
@AmountForEach int, @LooperSalesPeoplerecords int,
@LooperNewSalesLeadsRecords int, @SalesPersonID int
SELECT @TotalSalesPeopleCount = COUNT(UserId)
FROM SalesPeople
WHERE Active = 1
--
SELECT @TotalRecordCount = COUNT(*)
FROM NewSalesLeads
--
SELECT @AmountForEach = (@TotalRecordCount/@TotalSalesPeopleCount)
--
SELECT @LooperSalesPeoplerecords = 1
SELECT @LooperNewSalesLeadsRecords = 1
--
WHILE @LooperSalesPeoplerecords <= @TotalSalesPeopleCount
BEGIN
WHILE @LooperNewSalesLeadsRecords <= @TotalRecordCount
BEGIN
SELECT @SalesPersonID = (SELECT UserID
FROM SalesPeople
WHERE UniqueId = @LooperSalesPeoplerecords)
SELECT @LooperSalesPeoplerecords =
(@LooperSalesPeoplerecords + 1)
UPDATE SalesLeads
SET SalesPerson_ID = @SalesPersonID
WHERE UNIQUEID = @LooperSalesPeoplerecords
SELECT @LooperSalesPeoplerecords =
(@LooperSalesPeoplerecords + 1)
END
END
--
Table structures
CREATE TABLE [dbo].[SalesPeople] (
[SalesPerson_ID] [int] NOT NULL ,
[FirstName] [varchar](20)NOT NULL
) ON [PRIMARY]
--
INSERT INTO SalesPeople (SalesPerson_ID,FirstName) VALUES (26, 'Bill')
INSERT INTO SalesPeople (SalesPerson_ID,FirstName) VALUES (28, 'Bob')
INSERT INTO SalesPeople (SalesPerson_ID,FirstName) VALUES (37,
'Chris')
---------------
CREATE TABLE [dbo].[SalesLeads] (
[SalesLeadID] [int]NOT NULL ,
[SalesPerson_ID] [int]NOT NULL
) ON [PRIMARY]
--
INSERT INTO SalesLeads (SalesLeadID,SalesPerson_ID) VALUES (1001,0)
INSERT INTO SalesLeads (SalesLeadID,SalesPerson_ID) VALUES (1002,0)
INSERT INTO SalesLeads (SalesLeadID,SalesPerson_ID) VALUES (1003,0)
INSERT INTO SalesLeads (SalesLeadID,SalesPerson_ID) VALUES (1004,0)
INSERT INTO SalesLeads (SalesLeadID,SalesPerson_ID) VALUES (1005,0)
INSERT INTO SalesLeads (SalesLeadID,SalesPerson_ID) VALUES (1006,0)
---------------
So in this case, all 3 salespeople should receive 2 salesleads each.
I dumbed this down quite a bit. It actually ends up being more like
15 sales people, and about 400,000 sales leads. But it should work on
any level.
Thanks for any help you might shed on this.

Hi
Something like this may help. It assumes the sales IDs are sequential, and
does not worry whether there is an exact division.
UPDATE L
SET SalesPerson_ID = M.SalesPerson_ID
FROM SalesLeads L
JOIN ( SELECT P.[FirstName], P.[SalesPerson_ID],
       ( SELECT COUNT(*) FROM [dbo].[SalesPeople] S
         WHERE S.[SalesPerson_ID] < P.[SalesPerson_ID] ) AS Rank
       FROM [dbo].[SalesPeople] P ) M
ON L.SalesLeadID % ( SELECT COUNT(*) FROM [dbo].[SalesPeople] ) = M.Rank
John
"Dave" <funkdm1@yahoo.com> wrote in message
news:f5174e0f.0406180540.2afaed20@posting.google.com...
> I have to automate a process that assigns sales leads to sales people.
> [rest of original post snipped]
> Thanks for any help you might shed on this.

|||John,
Yeah, we aren't that concerned about having an exactly even
distribution, so uneven divisors are not a problem.
And, unfortunately, the sales people aren't in sequential order. We have
hundreds of sales people that are assigned to various campaigns at any
given time, so this particular campaign may change on a daily basis,
meaning that the salespersonid is always non-sequential for this
campaign.
Thanks anyway.
"John Bell" <jbellnewsposts@hotmail.com> wrote in message news:<_mDAc.3861$Is4.34270449@news-text.cableinet.net>...
> Hi
> Something like this may help. It assumes the sales IDs are sequential, and
> does not worry whether there is an exact division.
> [solution and quoted original post snipped]
> > > Thanks for any help you might shed on this.

|||Hi
The solution does not rely on the salesperson IDs being sequential, as it
ranks them separately. If the saleslead IDs are reasonably sequential then
you should be OK.
John
"Dave" <funkdm1@yahoo.com> wrote in message
news:f5174e0f.0406181026.4c69acff@posting.google.com...
> John,
> [previous exchange snipped]
> > > > Thanks for any help you might shed on this.

|||I think you meant to have something like this, with the keys shown and
some extra columns added:
CREATE TABLE SalesPeople
(salesperson_id INTEGER NOT NULL PRIMARY KEY,
firstname VARCHAR(20)NOT NULL,
seq_nbr INTEGER NOT NULL,
.. );
Run this guy when the table is changed.
UPDATE SalesPeople
SET seq_nbr
= (SELECT COUNT(*)
FROM SalesPeople AS P1
WHERE P1.salesperson_id <= SalesPeople.salesperson_id)
Now a table for the leads as leads and not what you had:
CREATE TABLE SalesLeads
(saleslead_id INTEGER NOT NULL PRIMARY KEY,
grp_nbr INTEGER,
..);
First group the leads based on the number of sales people:
UPDATE SalesLeads
SET grp_nbr
= (SELECT COUNT(*)
FROM SalesLeads AS L1
WHERE L1.saleslead_id <= SalesLeads.saleslead_id)
% (SELECT COUNT(*) FROM SalesPeople);
Now look at a VIEW to get each sales person the group of leads they
are to work.
CREATE VIEW LeadAssignments (saleslead_id, salesperson_id)
AS SELECT L1.saleslead_id, P1.salesperson_id
FROM Salesleads AS L1, SalesPeople AS P1
WHERE L1.grp_nbr = P1.seq_nbr;

|||I think I understand what you are saying here. However, this thing has
to be completely automated and just do the update every night.
I do appreciate the suggestions.
I have to think that this is done fairly often, and it can't be this
difficult to find some folks that have written something similar.
Thanks,
Dave
jcelko212@earthlink.net (--CELKO--) wrote in message news:<18c7b3c2.0406181437.187bf80a@posting.google.com>...
> I think you meant to have something like this, with the keys shown and
> some extra columns added:
> [solution snipped]
> WHERE L1.grp_nbr = P1.seq_nbr;

|||How about something like this:
----------------------
Create Table #Leads
(
LeadID Int Identity Not Null,
SalesID Int,
UniqueId Int -- Bad name
)
Create Table #SalesPeople
(
SalesID Int Identity (0, 1) Not Null,
UserID Char(30) -- type?
)
DECLARE @SalesPeopleCount int
-- Put sales people in the temp table.
Insert Into #SalesPeople (UserID)
Select UserID From SalesPeople Where Active = 1
-- How many were there?
Set @SalesPeopleCount = @@RowCount
-- Put leads in the temp table.
Insert Into #Leads (UniqueId)
Select UniqueId From NewSalesLeads
-- Match sales people to leads
Update #Leads Set SalesID = LeadID % @SalesPeopleCount
-- Save the matches.
UPDATE SalesLeads
SET SalesPerson_ID = UserID
From
SalesLeads
Join #Leads On SalesLeads.UniqueId = #Leads.UniqueId
Join #SalesPeople On #Leads.SalesID = #SalesPeople.SalesID
----------------------
Temporary IDs will always be sequential. The only problem with
this is that lower sales IDs will consistently get more leads.
You could fix that by adding something variable to LeadID before
performing the mod.
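For instance, a cheap date-based offset could serve as that variable (a sketch; it reuses the #Leads table and @SalesPeopleCount variable from the batch above, and the choice of DATEPART here is just an illustration):

```sql
DECLARE @Offset int

-- Day-of-year changes daily, so the salesperson who picks up the
-- leftover leads rotates instead of always being the lowest SalesID
SET @Offset = DATEPART(dy, GETDATE())

Update #Leads Set SalesID = (LeadID + @Offset) % @SalesPeopleCount
```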
If you're going to use ID columns, you need to treat them as keys.
Joe Celko was using something like this as a bad example a few
weeks ago, trying to convince me that ID columns are a bad idea.
Used halfway like this, I would agree.
Bill
Dave wrote:
> I have to automate a process that assigns sales leads to sales people.
> [rest of original post snipped]
> Thanks for any help you might shed on this.
Wednesday, March 7, 2012
Looking for suggestion of how to manage a DB file when it getting too big.
Hi all,
At our facility, we have a > 350 gig database, and the DB is broken into 2 files. One file is our Image table, with a size of 300 gigs. The other file contains all of the other tables. The image file is getting bigger and bigger every day, and I am worried that eventually, when a data file gets too big, it could lead to problems - DB fragmentation, hard to manage, etc.
My question is: has anyone run into this problem, and how did you solve it?
I am open for any suggestion.
BTW: I am using SQL 2000.
Thanks in advance.
Use filegroups. Assign the frequently modified tables to one filegroup, and those that don't get modified frequently to another filegroup. When you create filegroups, you also modify your backup strategies.

|||Since your basic problem is with the growth of your images, you need to come up with a mechanism for managing those.
If this is SQL Server 2005, then I would implement partitioning against that table on your identifier. That will allow you to split the table across multiple filegroups and files. If this is SQL Server 2000, then you can add multiple files to the filegroup that this table resides on (the files don't necessarily need to be in the same directory or even drive).
It doesn't really matter that they are images or discrete data, managing them occurs using the same process. You need to rely on one or more filegroups each with one or more files in them.
|||Thanks all for your suggestions. The main problem we are having is that we have one huge table (300 gigs) residing in one filegroup, and splitting a table across multiple files is not an option for us since we are using SQL2k (might have to wait until we upgrade to 2005), but I'm going to use your suggestions with other databases.
Thanks
|||I don't understand why not. I am NOT suggesting splitting across multiple fileGROUPS, because that would require an upgrade to 2005. I am suggesting adding FILES to the existing filegroup. This is absolutely supported and possible in SQL Server 2000. The data in that single table would then be written across multiple FILES within a single filegroup and would at least give you options on managing the storage space.|||
Hmm, I obviously didn't think of this possibility. Data could be written across multiple files! It makes perfect sense now. Damn, how could I not see this :-( - sorry for misunderstanding your comment, and thanks for enlightening me, Michael.
Appreciated.
|||Hello Michael Hotek,
I had gone through the post, but did not understand how it will help the issue.
I have one similar problem here. We are using MS SQL 2000, where there is only one table in the database in which scanned document images are stored, and the database is growing by 2 to 3 GB per day. Currently it is up to 300 GB. For safety reasons we are planning to partition/split the data in the table according to one key field, 'book_no', so that there will be separate partitions for each book_no. At the end we need to combine them all together to form the original table.
Please guide/suggest me how to split the table with a query like 'SELECT * FROM doc_master WHERE book_no=1', split the entire table by separate book_no's, and later combine them together to form the original table?
Right now I am planning like this: create dummy databases for each book_no with the same structure as doc_master, and use 'INSERT...SELECT...WHERE book_no=?' statements to fill these databases and delete the rows from the original database. Will this be OK without harming the data? Please suggest.
Rajeev,
My problem is I can't reduce the size of my image db, I can't archive (at least in SQL 2000), I can't partition my table either, and I don't want my data file growing too big. So splitting a single data file of a table into multiple data files (like what Michael suggested) is the way to go. I don't think this will gain much in performance or anything, but having smaller data files would help me to easily manage the files. I can also store these smaller data files on separate drives, etc., so the suggestion did help me in the way of managing the physical data files.
I think what you are trying to do is separate data into separate databases. In my opinion, this could lead to more complication, since now you have to manage multiple databases, multiple db permissions, db maintenance, and it might also hurt your application performance, etc.; in the end you are not gaining much.
|||
You can add data files to a SQL 2000 database, within the same filegroup.
What this will accomplish is that the new data inserted into your table will start spreading to both files. It will not automatically rebalance existing data between files.
Have you experienced specific problems as a result of having a large table, or are you just anticipating that you might?
What is the storage that this database is on?
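For reference, adding a second file to a database's default filegroup on SQL 2000 looks like the sketch below (the database name, logical file name, path, and sizes are made-up placeholders; a TO FILEGROUP clause could target a non-default filegroup):

```sql
ALTER DATABASE ImageDB
ADD FILE (
    NAME = ImageDB_Data2,                    -- new logical file name
    FILENAME = 'E:\Data\ImageDB_Data2.ndf',  -- can be on a different drive
    SIZE = 1024MB,
    FILEGROWTH = 512MB
)
```

New inserts into tables on that filegroup then spread across both files proportionally; as noted above, existing data is not rebalanced automatically.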