Friday, March 23, 2012
interesting update/insert trigger problem (null issue)
Ok so here is the issue. I am thinking I somehow have to clear the old data after executing the trigger.
So I add a user Joe Brown with his info on a users table, a trigger fires
and dumps the duplicate data into a users-dup table (for other justifiable
purposes). Update does the same basic thing.
Works fine. Here is the problem. I then add another name, Jack Black and
his info, but he has some null values... like I don't know his address. So
now the trigger fires and all of the duplicate data is carried across to the
dup table... except where there was no data (NULL) in a field... it is
adding the last "real" data set in place of the null. So Jack Black has
Joe Brown's address in his field... since it was the last "not null" value
entered in that column.
Looking for ideas. Figured it is something simple I am just missing. Like
some sort of purge call.
Below is the code I am using:
for insert:
CREATE TRIGGER insertUserMrktg ON [dbo].[USERS]
FOR INSERT
AS
insert into user_marketing (greeting, fName, lName, title, compName,
address, city, provState, fk_country,
zip, email, phone, phoneext, fax, fk_language, fk_segment, fk_job,
emailType, addedBy, fk_userID)
select greeting, fName, lName, title, compName, address, city, provState,
fk_country,
zip, email, phone, phoneext, fax, fk_language, fk_segment, fk_job,
emailType, addedBy, pk_userID
FROM Inserted
for update:
CREATE TRIGGER updateUserMrktg ON [dbo].[USERS]
FOR UPDATE
AS
update a
set a.greeting=b.greeting,
a.fName=b.fName,
a.lName=b.lName,
a.title=b.title,
a.compName=b.compName,
a.address=b.address,
a.city=b.city,
a.provState=b.provState,
a.fk_country=b.fk_country,
a.zip=b.zip,
a.email=b.email,
a.phone=b.phone,
a.phoneext=b.phoneext,
a.fax=b.fax,
a.fk_language=b.fk_language,
a.fk_segment=b.fk_segment,
a.fk_job=b.fk_job,
a.emailType=b.emailType,
a.addedBy=b.addedBy
FROM user_marketing a, users b
where a.fk_userID=
(SELECT pk_userID
FROM Inserted)
Thanks!
Two questions.
1 - Why do you want to do this?
2 - How can we identify the last "real" data inserted in the table? How do
you know it is "real" and not a fake like you are trying to do?
AMB
|||Hi
Triggers are executed per statement, which can update multiple rows. Using
where a.fk_userID= (SELECT pk_userID FROM Inserted) will return just one
arbitrary value. You are also not relating user_marketing to users
To keep user_marketing in step try:
update a
set a.greeting=b.greeting,
a.fName=b.fName,
a.lName=b.lName,
a.title=b.title,
a.compName=b.compName,
a.address=b.address,
a.city=b.city,
a.provState=b.provState,
a.fk_country=b.fk_country,
a.zip=b.zip,
a.email=b.email,
a.phone=b.phone,
a.phoneext=b.phoneext,
a.fax=b.fax,
a.fk_language=b.fk_language,
a.fk_segment=b.fk_segment,
a.fk_job=b.fk_job,
a.emailType=b.emailType,
a.addedBy=b.addedBy
FROM dbo.user_marketing a
JOIN Inserted b ON b.pk_userID = a.fk_userID
John
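To see why the JOIN against Inserted matters, try one statement that touches more than one user; triggers fire once per statement, not per row. A minimal sketch, assuming two users with pk_userID 1 and 2 already exist in USERS and user_marketing:
-- One statement, one trigger firing, two rows in the Inserted pseudo-table.
UPDATE dbo.USERS
SET city = 'Springfield'
WHERE pk_userID IN (1, 2);
-- With the JOIN to Inserted, each user_marketing row is updated from its own
-- matching Inserted row. The original (SELECT pk_userID FROM Inserted) subquery
-- cannot cope with two rows, and the unjoined users table lets values from the
-- wrong user leak across - which is exactly the "Joe Brown's address" symptom.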
"cheezebeetle" wrote:
|||Thanks John...
That worked. Duh...
Interesting UPDATE STATEMENT for SQL
but not in SQL 7.0; it gives the message
Server: Msg 147, Level 16, State 2, Procedure spTEST1, Line 57
An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.
Create Procedure "spTEST1"
As
--Update Processed Flags
UPDATE tblTransaction
SET TransProcessed = 1
FROM dbo.tblTrustGroupTrust
INNER JOIN dbo.tblTrust ON dbo.tblTrustGroupTrust.TgtTrustID = dbo.tblTrust.TrustID
INNER JOIN dbo.tblTransaction ON dbo.tblTrust.TrustID = dbo.tblTransaction.TransTrustID
INNER JOIN dbo.tblShareholder ON dbo.tblTransaction.TransShID = dbo.tblShareholder.ShID
INNER JOIN dbo.tblTransType ON dbo.tblTransType.TransTypeID = dbo.tblTransaction.TransTypeID
WHERE (dbo.tblTrustGroupTrust.TgtTrustGroupID = 3) AND (dbo.tblTransaction.TransProcessed = 0)
AND (dbo.tblShareholder.ShPaymentMethod = 1) AND (dbo.tblShareholder.ShDeceased = 0) AND tblShareholder.ShBankAccount IS NOT NULL
AND EXISTS
(
SELECT dbo.tblTransaction.TransShID
FROM dbo.tblTrustGroupTrust
INNER JOIN dbo.tblTransaction ON dbo.tblTrustGroupTrust.tgtTrustID = dbo.tblTransaction.TransTrustID
WHERE (dbo.tblTransaction.TransProcessed = 0) AND (dbo.tblTrustGroupTrust.TgtTrustGroupID = 3) AND (dbo.tblShareholder.SHID = dbo.tblTransaction.TransShID)
GROUP BY dbo.tblTransaction.TransShID
HAVING SUM(tblTransaction.TransAmt) >= 10
)
When using a straight select on this statement in SQL7.0 it works fine:
SELECT tblTransaction.TransProcessed
FROM .....
Can anyone shed some light on this?
The aggregate is on the table being updated - although it is in a subquery, it is correlated and so still fails the test.
If you can change the having clause to
HAVING SUM(dbo.tblTransaction.TransAmt) >= 10
so that it is using the copy of the table in the subquery then it should work.|||It might be clearer if you use an alias for the subquery table like
tblTransaction t2.|||Thanks, I very nearly tried to add the missing dbo prefixes but I presumed that would be too trivial and the error was more complicated than something like that.
Obviously SQL2000 is a little more relaxed on this sort of thing, as it works without error.
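For reference, the aliased form suggested above would look roughly like this; only the EXISTS block is shown, with the inner copies of the tables renamed so the aggregate unambiguously refers to them (names are taken from the post, otherwise untested):
AND EXISTS
(
SELECT t2.TransShID
FROM dbo.tblTrustGroupTrust tgt2
INNER JOIN dbo.tblTransaction t2 ON tgt2.TgtTrustID = t2.TransTrustID
WHERE (t2.TransProcessed = 0) AND (tgt2.TgtTrustGroupID = 3) AND (dbo.tblShareholder.ShID = t2.TransShID)
GROUP BY t2.TransShID
HAVING SUM(t2.TransAmt) >= 10
)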
Interesting Transactional Replication issue
We have moved from SQL 2000 to SQL 2005 for our main server, and our reporting server, which uses transactional replication.
Now, in SQL 2000 when I originally setup replication, it replicated all of the table indexes.
I have recreated the publications in SQL 2005, but they are no longer there. Do you have any idea what would cause some of our table indexes to be missing?
What can be done to ensure this doesn't happen?
Thank you.
Justin, the default article schema option when creating a transactional publication through the SQL 2005 workbench is to not replicate any non-clustered indexes (unique key constraints, the clustered index, and the primary key are replicated though). Is it possible that the indexes you are missing at the subscriber are simply non-clustered indexes? Hope that helps.
-Raymond
|||Thanks for the reply. You're correct in that they are non-clustered indexes. How would I modify replication to include these secondary indexes?|||
Hi Justin,
You can use sp_changearticle to enable the NonClusteredIndexes (0x40) schema option, or you can change the 'Copy non-clustered indexes' option to true on the article property sheet (right-click the publication node -> Properties -> select Articles in the left pane -> click the Article Properties button).
Hope that helps,
-Raymond
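A rough sketch of the sp_changearticle route; the publication and article names below are placeholders, and the schema_option value is only illustrative - read the article's current schema_option first (sp_helparticle shows it) and OR the 0x40 bit into that value rather than copying this one:
-- Placeholder names; changing schema_option invalidates the snapshot.
EXEC sp_changearticle
    @publication = N'MyPublication',
    @article = N'MyArticle',
    @property = N'schema_option',
    @value = N'0x00000000000000D3',
    @force_invalidate_snapshot = 1;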
Interesting subscription question...I could use some brainstorming help
I have a report that needs to be emailed to 40 different technicians.
Each technician handles a different US region so the report will be
driven via a parameter (most likely a technician ID). We don't want
other technicians seeing data for someone else's region. This all has
to be automated, no user interaction with this report other than each
technician getting an email with his/her data (based on his/her
results).
What would be the best way to handle this with a subscription?
I do know that this will have to be a data-driven subscription. Is it
possible for the value in a data-driven query to be used as a parameter in
the report?
|||Absolutely - DDS will be just what the doctor ordered. You will need
to feed the parameter values into a table for the DDS to pull from. I
use both Access DB and Excel spreadsheet uploads to upload subscription
parameters into a SQL Server table, which is then used by the DDS.
Matt A
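As a concrete illustration of that driver-table idea (every name here is invented for the example): the data-driven subscription's query returns one row per technician, and the subscription wizard then maps EmailAddress to the To field and TechnicianID to the report's parameter.
-- Hypothetical table holding one row per technician subscription.
CREATE TABLE dbo.TechnicianSubscriptions
(TechnicianID int NOT NULL PRIMARY KEY,
 EmailAddress varchar(255) NOT NULL);
-- Query used as the data-driven subscription's data source.
SELECT TechnicianID, EmailAddress
FROM dbo.TechnicianSubscriptions;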
Interesting SQLProblem using joins
I have spent hours on this problem and no research has turned in my favour. Does anyone have any examples they can put forward for me.
I am really desperate to sort this out, and any help will be greatly appreciated.
Thanks
Shaun
Are you meaning col1.value + col2.value from table1 is equal to colx.value of table2?
P
Interesting SQL...
A developer just finished complaining about the performance of one of our databases. Well, he sent me the query and I couldn't understand why it was such a dog. Anyways I rewrote it. The execution plan is totally different between the two. I had no idea specifying the join made such a difference. The first SQL executed in 7 minutes; the 2nd took 1 second.
SELECT
dbo.contract_co.producer_num_id, contract_co_status
FROM
dbo.contract_co,
dbo.v_contract_co_status
WHERE ( dbo.v_contract_co_status.contract_co_id = dbo.contract_co.contract_co_id )
AND contract_co_status = 'Pending'
OR ( contract_co_status = 'Active' and effective_date > '1/1/2004' )
SELECT
dbo.contract_co.producer_num_id, contract_co_status
FROM dbo.contract_co
INNER JOIN dbo.v_contract_co_status
ON dbo.contract_co.contract_co_id = dbo.v_contract_co_status.contract_co_id
WHERE contract_co_status = 'Pending'
OR ( contract_co_status = 'Active' and effective_date > '1/1/2004' )
The two queries look like they will give different results, too. The first one appears to include a cartesian join. The OR in the where clause makes all the difference.|||Don't want to sound like a snob, but it's all due to the order of processing by QP:
1. JOIN
2. GROUP
3. WHERE
4. HAVING
By rewriting the old query you filtered out what the first query had to deal with while still trying to JOIN.|||actually, i believe it's
1. JOIN
2. WHERE
3. GROUP
4. HAVING
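Worth noting: the two queries are not logically equivalent either. In the first one the OR branch is never constrained by the join predicate, so every 'Active' row is effectively cross-joined against contract_co. A parenthesized version of the original - shown here only for comparison - would match the rewrite's meaning:
SELECT
dbo.contract_co.producer_num_id, contract_co_status
FROM
dbo.contract_co,
dbo.v_contract_co_status
WHERE ( dbo.v_contract_co_status.contract_co_id = dbo.contract_co.contract_co_id )
AND ( contract_co_status = 'Pending'
   OR ( contract_co_status = 'Active' and effective_date > '1/1/2004' ) )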
Interesting SQL Server behaviour
The following code describes a problem I'm having when I'm selecting
data via a Case statement.
The data I'm selecting is from a table with 2 varchar columns and 1
int column. When I select this data I get the following error:
Server: Msg 245, Level 16, State 1, Line 1
Syntax error converting the varchar value 'A' to a column of data type
int.
If I cast the int to a varchar it works, but I would prefer not to
cast if possible. SQL server seems to look at all the datatypes in the
case statement and if a int data type appears, it seems to be trying
to insert into a int column. Is this some sort of temp table? Anyone
know why it assigned an int column here?
Any ideas what I can do
Thanks,
Tom
- --
- -- Set up the table
- --
create table Tom
(charcol1 varchar(2)
,charcol2 varchar(2)
,intcol1 int
)
insert into tom
(charcol1
,intcol1
,charcol2
)
values
('A',1,'C')
- ---
- -- Code
- ---
SELECT
CASE numlist.list
WHEN 1 THEN
Tom.charcol1
WHEN 2 THEN
--cast (Tom.intcol1 as varchar(5))
Tom.intcol1
WHEN 3 THEN
Tom.charcol2
END AS result
FROM Tom
CROSS JOIN (select 1 as list
union all
select 2 as list
union all
select 3 as list
) as numlist
This is how CASE works: the result of a CASE expression is always a single datatype. The output type is determined as
per "datatype precedence" documented in Books Online.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
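In practice that means the commented-out CAST in the original script is the intended fix: once every branch yields a character type, the CASE no longer tries to convert 'A' to int. Roughly:
SELECT
CASE numlist.list
WHEN 1 THEN Tom.charcol1
WHEN 2 THEN CAST(Tom.intcol1 AS varchar(12)) -- wide enough for any int value
WHEN 3 THEN Tom.charcol2
END AS result
FROM Tom
CROSS JOIN (select 1 as list
union all select 2
union all select 3) as numlist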
Interesting SQL query requirement for <SELECT> menu
Wondered if you could help me with the below query.
I have 1 simple table called STOCKCATS that consists of 2 fields.
These fields are called CATID and LEVEL.
The contents of this table are as follows:
CATID LEVEL
cat01 <nothing>
cat02 <nothing>
cat03 cat01
cat04 <nothing>
cat05 cat01
cat06 cat02
cat07 cat04
etc.. etc...
The way this table works is that I have an ASP page that allows the user to
create a stock category at 2 levels, category level and sub-category level.
When I file the entered data into the table, if the user has chosen to
create a category level stock category then the LEVEL field is left blank
and if they chose to create a sub-category level category then I post the
relevant category level stock category code in the LEVEL field. For
example, in the above list cat01 is a category level stock category and
cat05 is a sub-category as it is a sub-category of cat01.
My query is that I want to populate a simple HTML <SELECT> menu (using ASP),
but instead of it being a straightforward 'select catid from stockcats order
by catid', I want to group this list into some kind of order, eg:
instead of:
cat01 <nothing> << I need to bring back this 2nd column so that I can
do a simple IF THEN in asp to indent sub-cats
cat02 <nothing>
cat03 cat01
cat04 <nothing>
cat05 cat01
cat06 cat02
cat07 cat04
I would like
cat01 <nothing> << ditto
cat03 cat01
cat05 cat01
cat02 <nothing>
cat06 cat02
cat04 <nothing>
cat07 cat04
Do you know if this is possible in pure SQL (I must confess that I'm using
MySQL, but I would have thought the SQL syntax would be the same if it is
possible) or a combo of ASP & SQL?
Thanks
Robbie
On Mon, 7 Nov 2005 14:45:41 -0000, Astra wrote:
>Hi All
>Wondered if you could help me with the below query.
>I have 1 simple table called STOCKCATS that consists of 2 fields.
>These fields are called CATID and LEVEL.
>The contents of this table are as follows:
>CATID LEVEL
>cat01 <nothing>
>cat02 <nothing>
>cat03 cat01
>cat04 <nothing>
>cat05 cat01
>cat06 cat02
>cat07 cat04
>etc.. etc...
>The way this table works is that I have an ASP page that allows the user to
>create a stock category at 2 levels, category level and sub-category level.
>When I file the entered data into the table, if the user has chosen to
>create a category level stock category then the LEVEL field is left blank
>and if they chose to create a sub-category level category then I post the
>relevant category level stock category code in the LEVEL field. For
>example, in the above list cat01 is a category level stock category and
>cat05 is a sub-category as it is a sub-category of cat01.
Hi Robbie,
I'm not too happy with this design. Categories are not the same thing as
sub-categories, so you shouldn't lump them together in the same table.
CREATE TABLE Categories
(CatName varchar(10) NOT NULL,
PRIMARY KEY (CatName)
)
CREATE TABLE SubCategories
(SubCatName varchar(10) NOT NULL,
CatName varchar(10) NOT NULL,
PRIMARY KEY (SubCatName),
FOREIGN KEY (CatName) REFERENCES Categories (CatName)
)
>My query is that I want to populate a simple HTML <SELECT> menu (using ASP),
>but instead of it being a straightforward 'select catid from stockcats order
>by catid', I want to group this list into some kind of order, eg:
>instead of:
>cat01 <nothing> << I need to bring back this 2nd column so that I can
>do a simple IF THEN in asp to indent sub-cats
>cat02 <nothing>
>cat03 cat01
>cat04 <nothing>
>cat05 cat01
>cat06 cat02
>cat07 cat04
>I would like
>cat01 <nothing> << ditto
>cat03 cat01
>cat05 cat01
>cat02 <nothing>
>cat06 cat02
>cat04 <nothing>
>cat07 cat04
>Do you know if this is possible in pure SQL (I must confess that I'm using
>MySQL, but I would have thought the SQL syntax would be the same if it is
>possible) or a combo of ASP & SQL?
If you change the design as I suggest, then it's as simple as
SELECT CatName, NULL AS SubCatName
FROM Categories
UNION ALL
SELECT CatName, SubCatName
FROM SubCategories
ORDER BY CatName, SubCatName
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||Astra (No@.Spam.com) writes:
> I would like
> cat01 <nothing> << ditto
> cat03 cat01
> cat05 cat01
> cat02 <nothing>
> cat06 cat02
> cat04 <nothing>
> cat07 cat04
> Do you know if this is possible in pure SQL (I must confess that I'm using
> MySQL, but I would have thought the SQL syntax would be the same if it is
> possible) or a combo of ASP & SQL?
I believe this query would work in SQL Server:
SELECT CATID, LEVEL
FROM STOCKCATS
ORDER BY coalesce(LEVEL, CATID), LEVEL
But I don't think this conforms to ANSI standards, so it may not run
in MySQL.
If you want help with MySQL, you are probably better off asking in
comp.databases.mysql or some other MySQL forum.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Many thanks guys
Apologies for multi-post.
Rgds Robbie
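To see why Erland's COALESCE trick gives the grouping Robbie asked for, here is the sort key it computes for the sample rows (worked out by hand from the data above, so treat it as illustrative). The trailing LEVEL column in the ORDER BY keeps each parent row ahead of its children, because NULL sorts first in SQL Server:
CATID LEVEL sort key
cat01 <nothing> cat01
cat03 cat01 cat01
cat05 cat01 cat01
cat02 <nothing> cat02
cat06 cat02 cat02
cat04 <nothing> cat04
cat07 cat04 cat04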
Interesting SQL query requirement for <SELECT> menu
Wondered if you could help me with the below query.
I have 1 simple table called STOCKCATS that consists of 2 fields.
These fields are called CATID and LEVEL.
The contents of this table are as follows:
CATID LEVEL
cat01 <nothing>
cat02 <nothing>
cat03 cat01
cat04 <nothing>
cat05 cat01
cat06 cat02
cat07 cat04
etc.. etc...
The way this table works is that I have an ASP page that allows the user to
create a stock category at 2 levels, category level and sub-category level.
When I file the entered data into the table, if the user has chosen to
create a category level stock category then the LEVEL field is left blank
and if they chose to create a sub-category level category then I post the
relevant category level stock category code in the LEVEL field. For
example, in the above list cat01 is a category level stock category and
cat05 is a sub-category as it is a sub-category of cat01.
My query is that I want to populate a simple HTML <SELECT> menu (using ASP),
but instead of it being a straightforward 'select catid from stockcats order
by catid', I want to group this list into some kind of order, eg:
instead of:
cat01 <nothing> << I need to bring back this 2nd column so that I can
do a simple IF THEN in asp to indent sub-cats
cat02 <nothing>
cat03 cat01
cat04 <nothing>
cat05 cat01
cat06 cat02
cat07 cat04
I would like
cat01 <nothing> << ditto
cat03 cat01
cat05 cat01
cat02 <nothing>
cat06 cat02
cat04 <nothing>
cat07 cat04
Do you know if this is possible in pure SQL (I must confess that I'm using
MySQL, but I would have thought the SQL syntax would be the same if it is
possible) or a combo of ASP & SQL?
Thanks
Robbie
create table #tbl (c1 int, c2 int)
insert into #tbl
select 1 ,null
union select 3,1
union select 5,1
union select 2,null
union select 6,2
union select 4,null
union select 7,4
select * from #tbl
order by isnull(c2,c1),c1
|||Hi Moshe
Many thanks for your prompt reply, but how would this work?
Where does this relate to my STOCKCATS table?
Rdgs Robbie
<Moshe> wrote in message news:%23xbjsy64FHA.4076@.tk2msftngp13.phx.gbl...
create table #tbl (c1 int, c2 int)
insert into #tbl
select 1 ,null
union select 3,1
union select 5,1
union select 2,null
union select 6,2
union select 4,null
union select 7,4
select * from #tbl
order by isnull(c2,c1),c1
|||instead
select * from #tbl
order by isnull(c2,c1),c1
I would use
select * from #tbl
order by isnull(c2,c1), c2
to be sure main level is always the first and sub items are below.
A little problem arises if you need to sort the items alphabetically. Like:
Alpha
C...
R...
Beta
F...
R...
G...
select A.CATID, A.LEVEL, A.TEXT, (select B.TEXT from STOCKCATS B where
B.CATID=A.LEVEL) ParentText
from STOCKCATS A
order by ISNULL(ParentText, Text), c2, Text
but sorting by using ISNULL( varchar, ...) will be very slow. I think the
best way is to separate it into 2 SELECTs: one for root items, and another one for
getting subitems for each root item.|||On Mon, 7 Nov 2005 14:45:04 -0000, Astra wrote:
>Hi All
>Wondered if you could help me with the below query.
Hi Robbie,
I answered the exact same question in comp.databases.ms-sqlserver about
an hour ago.
Please do not post the same message independently to several groups.
Pick one group; post there. Or, if you really think that the message is
appropriate for two groups, post ONE SINGLE message and crosspost it to
both groups. That way, all answers will appear in both groups as well,
which saves us the time to answer a message that already has been
replied to in another group.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||Many thanks guys.
Apologies for the multi-post.
Rgds Robbie
Interesting SQL problem : How to track movement history
There's an interesting SQL problem I've come across that I'm currently
banging my head against. Given the following table that contains item
location information populated every minute :
location_id date_created
=========== ============
5 2000-01-01 01:00 <-- Don't need
5 2000-01-01 01:01 <-- Don't need
5 2000-01-01 01:02 <-- Need
7 2000-01-01 01:03 <-- Don't need
7 2000-01-01 01:04 <-- Need
5 2000-01-01 01:05 <-- Need
2 2000-01-01 01:06 <-- Don't Need
2 2000-01-01 01:07 <-- Need
7 2000-01-01 01:08 <-- Need
how would you generate a result-set that returns the item's location
history *without* duplicating the same location if the item has been
sitting in the same room for a while. For example, the result set
should look like the following :
location_id date_created
=========== ============
5 2000-01-01 01:02
7 2000-01-01 01:04
5 2000-01-01 01:05
2 2000-01-01 01:07
7 2000-01-01 01:08
This is turning out to be a finger twister and I'm not sure if it
could be done in SQL; I may have to resort to writing a stored-proc.
Regards,
Anthony
If you are using SQL Server 2005, you can do this
with cte(location_id,date_created,grp)
as (
select location_id,
date_created,
rank() over(partition by location_id order by date_created)
- rank() over(order by date_created)
from mytable)
select location_id,
max(date_created) as date_created
from cte
group by location_id,grp
order by max(date_created)|||On 20.04.2007 15:49, markc600@.hotmail.com wrote:
Quote:
Originally Posted by
If you are using SQL Server 2005, you can do this
>
with cte(location_id,date_created,grp)
as (
select location_id,
date_created,
rank() over(partition by location_id order by date_created)
- rank() over(order by date_created)
from mytable)
select location_id,
max(date_created) as date_created
from cte
group by location_id,grp
order by max(date_created)
I'd do
select location_id, max(date_created) date_created
from your_table
group by location_id
order by max(date_created)
Am I missing something?
robert|||Hello Robert,
That wouldn't work since it groups by location_id and would return the
following :
location_id date_created
=========== ============
5 2000-01-01 01:05
2 2000-01-01 01:07
7 2000-01-01 01:08
I am looking into Mark's solution as we speak...
Regards,
Anthony
On Apr 20, 9:59 am, Robert Klemme <shortcut...@.googlemail.comwrote:
Quote:
Originally Posted by
On 20.04.2007 15:49, markc...@.hotmail.com wrote:
>
Quote:
Originally Posted by
If you are using SQL Server 2005, you can do this
>
Quote:
Originally Posted by
with cte(location_id,date_created,grp)
as (
select location_id,
date_created,
rank() over(partition by location_id order by date_created)
- rank() over(order by date_created)
from mytable)
select location_id,
max(date_created) as date_created
from cte
group by location_id,grp
order by max(date_created)
>
I'd do
>
select location_id, max(date_created) date_created
from your_table
group by location_id
order by max(date_created)
>
Am I missing something?
>
robert|||Yep. What if the item returns to a location that it once occupied? Your
solution would not pick up the previous history.
--
Tom
----------------
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Robert Klemme" <shortcutter@.googlemail.comwrote in message
news:58rv9iF2ikhoqU1@.mid.individual.net...
On 20.04.2007 15:49, markc600@.hotmail.com wrote:
Quote:
Originally Posted by
If you are using SQL Server 2005, you can do this
>
with cte(location_id,date_created,grp)
as (
select location_id,
date_created,
rank() over(partition by location_id order by date_created)
- rank() over(order by date_created)
from mytable)
select location_id,
max(date_created) as date_created
from cte
group by location_id,grp
order by max(date_created)
I'd do
select location_id, max(date_created) date_created
from your_table
group by location_id
order by max(date_created)
Am I missing something?
robert|||On 20.04.2007 16:08, Tom Moreau wrote:
Quote:
Originally Posted by
Yep. What if the item returns to a location that it once occupied? Your
solution would not pick up the previous history.
Right, thanks Tom and Anthony for correcting me. Stupid me. I should
have spent few more CPU cycles on this. :-)
robert|||Hello Mark,
So fast on the trigger! What do you do, have a notification bell ring
whenever something is posted on this newsgroup? :P
Your solution worked perfectly... I'm used to regular ol' SQL and what
you've just shown me is amazing; I didn't know SQL had these
capabilities. rank, over and partition are new to me. Do you have any
book recommendations?
Regards,
Anthony
On Apr 20, 9:49 am, markc...@.hotmail.com wrote:
Quote:
Originally Posted by
If you are using SQL Server 2005, you can do this
>
with cte(location_id,date_created,grp)
as (
select location_id,
date_created,
rank() over(partition by location_id order by date_created)
- rank() over(order by date_created)
from mytable)
select location_id,
max(date_created) as date_created
from cte
group by location_id,grp
order by max(date_created)
--
Tom
----------------
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"Anthony Paul" <anthonypaulo@.gmail.comwrote in message
news:1177078960.725463.69600@.y80g2000hsf.googlegro ups.com...
Hello Mark,
So fast on the trigger! What do you do, have a notification bell ring
whenever something is posted on this newsgroup? :P
Your solution worked perfectly... I'm used to regular ol' SQL and what
you've just shown me is amazing; I didn't know SQL had these
capabilities. rank, over and partition are new to me. Do you have any
book recommendations?
Regards,
Anthony
On Apr 20, 9:49 am, markc...@.hotmail.com wrote:
Quote:
Originally Posted by
If you are using SQL Server 2005, you can do this
>
with cte(location_id,date_created,grp)
as (
select location_id,
date_created,
rank() over(partition by location_id order by date_created)
- rank() over(order by date_created)
from mytable)
select location_id,
max(date_created) as date_created
from cte
group by location_id,grp
order by max(date_created)
"Anthony Paul" <anthonypaulo@.gmail.comwrote in message
news:1177078960.725463.69600@.y80g2000hsf.googlegro ups.com...
Quote:
Originally Posted by
Hello Mark,
>
So fast on the trigger! What do you do, have a notification bell ring
whenever something is posted on this newsgroup? :P
>
I can't speak for Mark, but some of us have way too much time on our hands.
;-)
Quote:
Originally Posted by
Your solution worked perfectly... I'm used to regular ol' SQL and what
you've just shown me is amazing; I didn't know SQL had these
capabilities. rank, over and partition are new to me. Do you have any
book recommendations?
Look for books by Itzik Ben-Gan. (Inside Microsoft SQL Server 2005: T-SQL
Querying covers this; it's an island/gap problem.)
I remember the first time I saw him demo those functions (they're new in SQL
2005).
Some pretty incredible stuff.
Quote:
Originally Posted by
>
Regards,
>
Anthony
>
On Apr 20, 9:49 am, markc...@.hotmail.com wrote:
Quote:
Originally Posted by
>If you are using SQL Server 2005, you can do this
>>
>with cte(location_id,date_created,grp)
>as (
>select location_id,
> date_created,
> rank() over(partition by location_id order by date_created)
> - rank() over(order by date_created)
>from mytable)
>select location_id,
> max(date_created) as date_created
>from cte
>group by location_id,grp
>order by max(date_created)
>
>
--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html|||The usual design error is to have only one time in a row to capture
when an event started, then do horrible self-joins to get the duration
of the status change. Let me use a history table for price changes.
The fact to store is that a price had a duration:
CREATE TABLE PriceHistory
(upc CHAR(13) NOT NULL
REFERENCES Inventory(upc),
start_date DATE NOT NULL,
end_date DATE, -- null means current
CHECK(start_date < end_date),
PRIMARY KEY (upc, start_date),
item_price DECIMAL (12,4) NOT NULL
CHECK (item_price > 0.0000),
etc.);
You actually need more checks to assure that the start date is at
00:00 and the end date is at 23:59:59.999 hrs. You then use a
BETWEEN predicate to get the appropriate price.
SELECT ..
FROM PriceHistory AS H, Orders AS O
WHERE O.sales_date BETWEEN H.start_date
AND COALESCE (end_date, CURRENT_TIMESTAMP);
It is also a good idea to have a VIEW with the current data:
CREATE VIEW CurrentPrices (..)
AS
SELECT ..
FROM PriceHistory
WHERE end_date IS NULL;
Look up the Rick Snodgrass book on Temporal Queries in SQL at the
University of Arizona website; it is a free download.
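For anyone puzzling over Mark's rank()-difference trick earlier in the thread, here is the intermediate grp value it computes for the sample data at the top of the thread (worked out by hand, so verify before relying on it). Grouping by (location_id, grp) and taking MAX(date_created) per group then returns exactly the five rows Anthony wanted:
location_id date_created rank within location rank overall grp
5 2000-01-01 01:00 1 1 0
5 2000-01-01 01:01 2 2 0
5 2000-01-01 01:02 3 3 0
7 2000-01-01 01:03 1 4 -3
7 2000-01-01 01:04 2 5 -3
5 2000-01-01 01:05 4 6 -2
2 2000-01-01 01:06 1 7 -6
2 2000-01-01 01:07 2 8 -6
7 2000-01-01 01:08 3 9 -6
Note that rows sharing grp = -6 still fall into different groups because the GROUP BY includes location_id.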
interesting sql login issue
I have a SQL 2005 server running on SERVERA. I have the management studio
installed on my desktop (Windows XP Pro SP2) and connect to the sql server
using my regular domain login account (windows auth). This worked fine until
I had to change my domain user account's password as it had expired (we have
to change it every 90 days). Now, ever since then, I have been unable to
connect to the sql server in management studio. I get:-
Login Failed for user ''. Error: 18452
If I remote into the server and try to connect to sql it works fine (I am
member of local admin group on the server)
Any ideas?
http://support.microsoft.com/kb/555332
Is it possible that you have your Windows auth password hardcoded somewhere?
When worse comes to worst, you may have to drop and re-create your login on a
SQL Server.
>|||Can you connect from the command line with sqlcmd?
It may also be worth a try of deleting the registration and
reregistering the server.
-Sue
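For anyone unsure of the syntax Sue means, a trusted-connection test from the workstation would look roughly like this (SERVERA is the server name from the original post; add the instance name if it is a named instance):
sqlcmd -S SERVERA -E
sqlcmd -S SERVERA\INSTANCENAME -E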
>|||What do you mean by hardcoded? I just login to my workstation using my
windows domain login and password?
I have tried dropping and re-creating the login in SQL Server.
|||If you notice the username is empty in the error message below. Could it be
a bug where management studio is not passing credentials to the server?
Interesting SQL code editor
Please check out:
Fast SQL Editor
A new and powerful SQL editor for the professional user.
www.activefrog.com
We are still looking for Beta Testers of this new product. Please visit the
web site and download this amazing tool.
The activeFrog.com team
"activefrog.com" <mail@.activefrog.com> schrieb im Newsbeitrag
news:%23YStFAwIFHA.1096@.tk2msftngp13.phx.gbl...
> Please check out:
> Fast SQL Editor
> A new and powerfull SQL editor for the professional user.
> www.activefrog.com
> We are still looking for Beta Testers of this new product. Please visit
the
> web site and download this amazing tool.
> The activeFrog.com team
Although this question should be forbidden in this group... :-) Do you
plan to go cross platform, i.e. support other SQL dialects as well?
Kind regards
robert
|||One question I have is why develop a new sql editor considering that the
market is flooded anyway?
Interesting SELECT question
Does ANSI SQL support conditional selects ?
I have a table with say 15 fields
tblTale1
ID
F1
F2
F3
F4
F5
F6
..
..
..
F15
What I want to do is, add all the Fields from F1 to F15 and divide that
total by the total number of fields which were more than 0.
I know how to sum you can use Sum(f1+f2+f3...) , but how can I get the
number of fields which is more than 0 ?
Even if I put 15 conditions, if f1 > 0 or f2 > 0 or f3 > 0, it still doesn't
give me the # of fields for that one record which is greater than zero...
I can't think of any solution except to put these fields in another table.
Let me know if i'm missing something.
Thanks
I don't understand you. Can you post DDL+ sample data + expected result?
Did you mean ROWS not FIELDS?
|||Your table design is very questionable but assuming no NULLs and no
negatives, try this:
SELECT
(f1 + f2 + f3 + f4 + f5 + f6 + f7 + f8 + f9
+ f10 + f11 + f12 + f13 + f14 + f15)
/
(SIGN(f1) + SIGN(f2) + SIGN(f3) + SIGN(f4) + SIGN(f5)
+ SIGN(f6) + SIGN(f7) + SIGN(f8) + SIGN(f9) + SIGN(f10)
+ SIGN(f11) + SIGN(f12) + SIGN(f13) + SIGN(f14) + SIGN(f15))
FROM Table1 ;
If you have to cope with negatives then change SIGN(x) to ABS(SIGN(x)).
In general for conditional execution take a look at the CASE
expression.
David Portas
SQL Server MVP
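One caveat with the SIGN() approach, added here for completeness: if every field in a row is 0 the divisor is 0 and the query fails with a divide-by-zero error. Wrapping the divisor in NULLIF sidesteps that and returns NULL for such rows - a sketch with just three of the columns:
SELECT
(f1 + f2 + f3)
/ NULLIF(SIGN(f1) + SIGN(f2) + SIGN(f3), 0)
FROM Table1 ;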
|||you could use case
select (f1 + f2 + f3 + f4) / case when f1 = 0 and f2 = 0 and f3 = 0 and f4 =
0 then 1 else case when f1 = 0 then 0 else 1 end + case when f2 = 0 then 0
else 1 end + case when f3 = 0 then 0 else 1 end + case when f4 = 0 then 0
else 1 end end from tableName
|||Here are a couple of options:
select id,
(f1*s1 + f2*s2 + f3*s3 + ... + f15*s15)
/ (s1 + s2 + s3 + ... + s15) as avgpos
from (select *,
1+sign(sign(f1) -1) as s1,
1+sign(sign(f2) -1) as s2,
1+sign(sign(f3) -1) as s3,
...
1+sign(sign(f15)-1) as s15
from (select id,
isnull(f1, 0) as f1,
isnull(f2, 0) as f2,
isnull(f3, 0) as f3,
...
isnull(f15, 0) as f15
from t1) as d1) as d2
select id, avg(val) as avgpos
from (select id, n,
case n
when 1 then f1
when 2 then f2
when 3 then f3
...
when 15 then f15
end as val
from t1,
(select 1 as n
union all select 2
union all select 3
...
union all select 15) as nums) as d
where val > 0
group by id
BG, SQL Server MVP
www.SolidQualityLearning.com
"ms news group" <hemang@.hemang.net> wrote in message
news:OcBE7DigFHA.1284@.TK2MSFTNGP14.phx.gbl...
> Hello
> Does ANSI SQL support conditional selects ?
> I have a table with say 15 fields
> tblTale1
> ID
> F1
> F2
> F3
> F4
> F5
> F6
> .
> .
> .
> F15
>
> What I want to do is, add all the Fields from F1 to F15 and divide that
> total by the total number of fields which were more than 0.
> I know how to sum you can use Sum(f1+f2+f3...) , but how can I get the
> number of fields which is more than 0 ?
>
> Even if I put 15 conditions, if f1 > 0 or f2 > 0 or f3 > 0, it still
> doesn't give me the # of feilds for that one record which is greater than
> zero...
>
> I can't think of any solution except to put these fields in another table.
> Let me know if i'm missing something.
> Thanks
>
>
Interesting SELECT question
Does ANSI SQL support conditional selects ?
I have a table with say 15 fields
tblTale1
ID
F1
F2
F3
F4
F5
F6
..
..
..
F15
What I want to do is, add all the Fields from F1 to F15 and divide that
total by the total number of fields which were more than 0.
I know how to sum you can use Sum(f1+f2+f3...) , but how can I get the
number of fields which is more than 0 ?
Even if I put 15 conditions, if f1 > 0 or f2 > 0 or f3 > 0, it still doesn't
give me the # of feilds for that one record which is greater than zero...
I can't think of any solution except to put these fields in another table.
Let me know if i'm missing something.
Thanks
I don't understand you. Can you post DDL+ sample data + expected result?
Did you mean ROWS not FIELDS?
"ms news group" <hemang@.hemang.net> wrote in message
news:OcBE7DigFHA.1284@.TK2MSFTNGP14.phx.gbl...
> Hello
> Does ANSI SQL support conditional selects ?
> I have a table with say 15 fields
> tblTale1
> ID
> F1
> F2
> F3
> F4
> F5
> F6
> .
> .
> .
> F15
>
> What I want to do is, add all the Fields from F1 to F15 and divide that
> total by the total number of fields which were more than 0.
> I know how to sum you can use Sum(f1+f2+f3...) , but how can I get the
> number of fields which is more than 0 ?
>
> Even if I put 15 conditions, if f1 > 0 or f2 > 0 or f3 > 0, it still
doesn't
> give me the # of feilds for that one record which is greater than zero...
>
> I can't think of any solution except to put these fields in another table.
> Let me know if i'm missing something.
> Thanks
>
>
|||Your table design is very questionable but assuming no NULLs and no
negatives, try this:
SELECT
(f1 + f2 + f3 + f4 + f5 + f6 + f7 + f8 + f9
+ f10 + f11 + f12 + f13 + f14 + f15)
/
(SIGN(f1) + SIGN(f2) + SIGN(f3) + SIGN(f4) + SIGN(f5)
+ SIGN(f6) + SIGN(f7) + SIGN(f8) + SIGN(f9) + SIGN(f10)
+ SIGN(f11) + SIGN(f12) + SIGN(f13) + SIGN(f14) + SIGN(f15))
FROM Table1 ;
If you have to cope with negatives then change SIGN(x) to ABS(SIGN(x)).
In general for conditional execution take a look at the CASE
expression.
David Portas
SQL Server MVP
|||you could use case
select (f1 + f2 + f3 + f4) / case when f1 = 0 and f2 = 0 and f3 = 0 and f4 =
0 then 1 else case when f1 = 0 then 0 else 1 end + case when f2 = 0 then 0
else 1 end + case when f3 = 0 then 0 else 1 end + case when f4 = 0 then 0
else 1 end end from tableName
"ms news group" <hemang@.hemang.net> wrote in message
news:OcBE7DigFHA.1284@.TK2MSFTNGP14.phx.gbl...
> Hello
> Does ANSI SQL support conditional selects ?
> I have a table with say 15 fields
> tblTale1
> ID
> F1
> F2
> F3
> F4
> F5
> F6
> .
> .
> .
> F15
>
> What I want to do is, add all the Fields from F1 to F15 and divide that
> total by the total number of fields which were more than 0.
> I know how to sum you can use Sum(f1+f2+f3...) , but how can I get the
> number of fields which is more than 0 ?
>
> Even if I put 15 conditions, if f1 > 0 or f2 > 0 or f3 > 0, it still
doesn't
> give me the # of feilds for that one record which is greater than zero...
>
> I can't think of any solution except to put these fields in another table.
> Let me know if i'm missing something.
> Thanks
>
>
|||Here are a couple of options:
select id,
(f1*s1 + f2*s2 + f3*s3 + ... + f15*s15)
/ (s1 + s2 + s3 + ... + s15) as avgpos
from (select *,
1+sign(sign(f1) -1) as s1,
1+sign(sign(f2) -1) as s2,
1+sign(sign(f3) -1) as s3,
...
1+sign(sign(f15)-1) as s15
from (select id,
isnull(f1, 0) as f1,
isnull(f2, 0) as f2,
isnull(f3, 0) as f3,
...
isnull(f15, 0) as f15
from t1) as d1) as d2
select id, avg(val) as avgpos
from (select id, n,
case n
when 1 then f1
when 2 then f2
when 3 then f3
...
when 15 then f15
end as val
from t1,
(select 1 as n
union all select 2
union all select 3
...
union all select 15) as nums) as d
where val > 0
group by id
BG, SQL Server MVP
www.SolidQualityLearning.com
"ms news group" <hemang@.hemang.net> wrote in message
news:OcBE7DigFHA.1284@.TK2MSFTNGP14.phx.gbl...
> Hello
> Does ANSI SQL support conditional selects ?
> I have a table with say 15 fields
> tblTale1
> ID
> F1
> F2
> F3
> F4
> F5
> F6
> .
> .
> .
> F15
>
> What I want to do is, add all the Fields from F1 to F15 and divide that
> total by the total number of fields which were more than 0.
> I know how to sum you can use Sum(f1+f2+f3...) , but how can I get the
> number of fields which is more than 0 ?
>
> Even if I put 15 conditions, if f1 > 0 or f2 > 0 or f3 > 0, it still
> doesn't give me the # of feilds for that one record which is greater than
> zero...
>
> I can't think of any solution except to put these fields in another table.
> Let me know if i'm missing something.
> Thanks
>
>
Interesting scenario - Multiple files updating one table
I have a scenario where I have to read 2 files that update the same table (a temp staging table). This comes from the source system's limitation on the number of columns it can export. What we have done as a workaround is split the data into 2 files, where the 2nd file also contains the first file's primary key so we know which record to update...
Here is my problem...
The table that needs to be updated contains 9 columns. File 1 contains 5 of them and file 2 contains 4 of them.
File 1 inserts 100 rows and leaves the other 4 columns as NULLs, ready for file 2 to update them.
File 2 inserts 10 rows but fails on 90 rows due to incorrect data.
Thus only 10 rows are successfully updated and ready to be processed but 90 are incorrect. I want to still do processing on the existing 10 but can't afford to try and do processing on the broken ones...
The easy solution would be to remove the incorrect rows from the temp table whenever an error occurs on the 2nd file's package, by running a SQL query on the table using the primary keys that exist in both files, but when the error occurs on the Flat File source I can't get the primary key.
What would be the best suggestion? Should I rather fail the whole package if 1 row bombs out? I can't put any logic in the following package that does the master file update/insert from the temp table because of the nature of the data.
Regards
Mike
You have more than one way to accomplish that, I think.
You could add an extra column to the staging table that will act as a flag to indicate whether a row was properly updated by the 2nd file or not; then further steps can filter the rows based on the value of that column.
Or...
Why don't you create 2 staging tables, one for each file? Then you can use SQL statements to join them, perform some data quality checks, and decide which rows are going to be processed and which ones will be rejected.
|||You could also use a Merge Join transform with an inner join prior to loading the table - you would take the good-record pipelines from the two flat file sources into the Merge Join and set up an inner join on the key from each file. The pipeline output from the Merge Join would then have only the ten records that had a key match and all 9 columns, which you can then load to the SQL table destination.
|||Thanks, I used the flag method and it's doing fine.
Much appreciated, Rafael
Mike
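For reference, here is a rough sketch of what the flag approach can look like at the table level. The staging-table and column names below are invented for the example; the actual loads of the two files would still be done by the packages, and only the flag handling is shown.

-- Staging table: file 1 populates the key plus its 4 columns; file 2 later
-- fills in its 4 columns and flips the flag.  Rows that file 2 never reached
-- (the 90 failures) keep file2_loaded = 0.
CREATE TABLE stg_load (
    record_id    int NOT NULL PRIMARY KEY,   -- key shared by both files
    f1_col1 varchar(50), f1_col2 varchar(50), f1_col3 varchar(50), f1_col4 varchar(50),
    f2_col1 varchar(50), f2_col2 varchar(50), f2_col3 varchar(50), f2_col4 varchar(50),
    file2_loaded bit NOT NULL DEFAULT 0
)

-- The update driven by file 2 sets the flag on every row it manages to update
-- (literal values stand in for the columns read from file 2)
UPDATE stg_load
SET f2_col1 = 'a', f2_col2 = 'b', f2_col3 = 'c', f2_col4 = 'd',
    file2_loaded = 1
WHERE record_id = 42

-- Downstream processing then only picks up rows that both files touched
SELECT *
FROM stg_load
WHERE file2_loaded = 1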
Interesting question about memory usage - standard edition
I have sql server 2000 standard edition (windows 2000
server - NOT advanced) on the server which has 2560MB of
memory. This edition has a restriction of 2048MB of memory
it can use (the max server memory parameter is set to this value).
I'm doing a performance analysis of whether we need to
upgrade memory on this server (and use /awe) - we're
about to add some additional load to the system.
I collected statistics using performance monitor. Among
many counters, I was monitoring Available Memory (MB).
This value varies from 325 to 660.
Given that 512MB of memory is totally invisible to the
sql server, does this mean that we're hitting the wall
regarding memory for sql server? I also noticed something
strange, not aligned with bol, that sql server doesn't
release memory at all, because even if the server is
idle, available memory doesn't grow higher than 660MB
(there is nothing other than sql server that uses this
memory - dedicated database server)... Since performance
monitor is in this case useless (or not?), how can you
determine how much memory database engine really
uses/needs?
Thanks
The main things to look at are still Pages/sec (should be near 0 most of the
time) and Buffer Cache Hit ratio (>95%, but depends on what type of workload,
and you really need to be on SP3 for the number to mean anything). SQL
Server won't give back memory unless there is competition for it, so it's
perfectly normal for the Available Memory to stay at 660MB. Page life
expectancy, Memory grants outstanding and free buffers can also help to
narrow down memory contention.
HTH
Jasper Smith (SQL Server MVP)
I support PASS - the definitive, global
community for SQL Server professionals -
http://www.sqlpass.org
"Peja" <peja@.sympatico.ca> wrote in message
news:0f4b01c42621$2d719050$a601280a@.phx.gbl...
> Hi,
> I have sql server 2000 standard edition (windows 2000
> server - NOT advanced) on the server which has 2560MB of
> memory. This edition has restriction of 2048MB of memory
> it can use (max memory parameter set to this value).
> I'm making performance analysis of whether we need to
> upgrade memory on this server (and use /awe) - we're
> about to add some additional load to the system.
> I collected statistics using performance monitor. Among
> many counters, I was monitoring Available Memory (MB).
> This value varies from 325 to 660.
> Given that 512MB of memory is totally invisible to the
> sql server, does this mean that we're hitting the wall
> regarding memory for sql server? I also noticed something
> strange, not aligned with bol, that sql server doesn't
> release memory at all, because even if the server is
> idle, available memory doesn't grow higher than 660MB
> (there is nothing other than sql server that uses this
> memory - dedicated database server)... Since performance
> monitor is in this case useless (or not?), how can you
> determine how much memory database engine really
> uses/needs?
> Thanks
|||So Buffer Cache is not reliable if you have 2000 SP2?
>--Original Message--
>The main things to look at are still Pages/sec (should be near 0 most of the
>time) and Buffer Cache Hit ratio (>95% but depends on what type of workload
>and you really need to be on SP3 for the number to mean anything). SQL
>Server won't give back memory unless there is competition for it so its
>perfectly normal for the Available Memory to stay at 660MB. Page life
>expectancy, Memory grants outstanding and free buffers can also help to
>narrow down memory contention
>--
>HTH
>Jasper Smith (SQL Server MVP)
>I support PASS - the definitive, global
>community for SQL Server professionals -
>http://www.sqlpass.org
>
>"Peja" <peja@.sympatico.ca> wrote in message
>news:0f4b01c42621$2d719050$a601280a@.phx.gbl...
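As a footnote to Jasper's list: on SQL Server 2000 those same counters can also be read from inside the server through master.dbo.sysperfinfo, which mirrors the SQLServer:Buffer Manager and Memory Manager perfmon objects. A quick sketch is below - the exact counter_name strings may vary slightly between builds, so treat the names as approximate, and note that the buffer cache hit ratio only becomes a percentage after dividing by its 'base' counter.

-- Snapshot of the memory-related counters from within SQL Server 2000
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base',
                       'Page life expectancy',
                       'Free pages',
                       'Memory Grants Outstanding',
                       'Total Server Memory (KB)',
                       'Target Server Memory (KB)')

-- Hit ratio as a percentage: ratio divided by its base counter
SELECT 100.0 * MAX(CASE WHEN counter_name = 'Buffer cache hit ratio'
                        THEN cntr_value END)
             / NULLIF(MAX(CASE WHEN counter_name = 'Buffer cache hit ratio base'
                               THEN cntr_value END), 0) AS buffer_cache_hit_pct
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('Buffer cache hit ratio', 'Buffer cache hit ratio base')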