Showing posts with label scenario. Show all posts

Friday, March 23, 2012

Interesting scenario - Multiple files updating one table

Hi,

I have a scenario where I must read 2 files that update the same table (a temp staging table)... this comes from the source system's limitation on the number of columns it can export. As a workaround, we split the data into 2 files, where the 2nd file contains the first file's primary key so we know which record to update...

Here is my problem...

The table that needs to be updated contains 9 columns. File 1 contains 5 of them and file 2 contains the other 4.

File 1 inserts 100 rows, leaving the other 4 columns null and ready for file 2 to update.
File 2 updates 10 rows but fails on 90 rows due to incorrect data.
Thus only 10 rows are successfully updated and ready to be processed, while 90 are incorrect. I still want to process the good 10 but can't afford to attempt processing the broken ones...

The easy solution would be to remove the incorrect rows from the temp table whenever an error occurs in the 2nd file's package, by running a SQL query on the table using the primary keys that exist in both files. But when the error occurs at the Flat File source, I can't get the primary key.

What would be the best suggestion? Should I rather fail the whole package if one row bombs out? I can't put any logic in the following package that does the master file update/insert from the temp table because of the nature of the data.

Regards
Mike

There is more than one way to accomplish that, I think.

You could add an extra column to the staging table to act as a flag indicating whether a row was properly updated by the 2nd file; then further steps would filter the rows based on the value of that column.
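A minimal sketch of that flag idea, using Python's sqlite3 as a stand-in for the real SQL Server staging table (the table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE Staging ("
    " Id INTEGER PRIMARY KEY, ColA TEXT, ColF TEXT,"
    " Complete INTEGER NOT NULL DEFAULT 0)"  # the extra flag column
)

# File 1 load: rows arrive with the flag still 0 (incomplete).
cur.executemany("INSERT INTO Staging (Id, ColA) VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])

# File 2 load: set the missing columns AND the flag in the same UPDATE,
# so only rows that file 2 actually reached get marked complete.
cur.execute("UPDATE Staging SET ColF = ?, Complete = 1 WHERE Id = ?", ("f", 1))

# Downstream steps filter on the flag and simply ignore the broken rows.
good = cur.execute("SELECT Id FROM Staging WHERE Complete = 1").fetchall()
print(good)  # [(1,)]
```

The point is that the flag is only set by the same statement that applies file 2's data, so a row can never be marked complete without its second half.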

Or...

Why don't you create 2 staging tables, one for each file? Then you can use SQL statements to join them, perform some data quality checks, and decide which rows will be processed and which rejected.
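The two-staging-tables idea could look roughly like this, again sketched with sqlite3 and placeholder names: each file loads its own table, and an inner join on the shared key keeps only the rows that made it through both loads.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Stage1 (Id INTEGER PRIMARY KEY, ColA TEXT)")
cur.execute("CREATE TABLE Stage2 (Id INTEGER PRIMARY KEY, ColF TEXT)")

# File 1 loads 3 rows; file 2 only manages 2 (one row failed validation).
cur.executemany("INSERT INTO Stage1 VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])
cur.executemany("INSERT INTO Stage2 VALUES (?, ?)", [(1, "x"), (3, "y")])

# The join yields complete rows only for keys present in both files;
# rejected rows can be found with a LEFT JOIN ... WHERE s2.Id IS NULL.
rows = cur.execute(
    "SELECT s1.Id, s1.ColA, s2.ColF "
    "FROM Stage1 AS s1 INNER JOIN Stage2 AS s2 ON s1.Id = s2.Id "
    "ORDER BY s1.Id"
).fetchall()
print(rows)  # [(1, 'a', 'x'), (3, 'c', 'y')]
```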

You could also use a Merge Join transform with an inner join prior to loading the table: you would take the good-record pipelines from the two flat file sources into the Merge Join and set up an inner join on the key from each file. The pipeline output from the Merge Join would then have only the ten records that had a key match, with all 9 columns, which you can then load to the SQL table destination.

Thanks, I used the flag method and it's doing fine.

Much appreciated, Rafael
Mike

Wednesday, March 7, 2012

Integration with WinForms to Printer

Hello,
I have the following scenario and I would like to know what my options
are:
I am using Reporting Services 2005 to generate a pre-printed report
(an invoice for example). My WinForms application (.NET 3.0) has a
screen through which the users can enter the report parameters (the
date, account number, etc.)
I want those parameters to be passed to reporting services to invoke
the invoice report, and I want the output go directly to the user's
local printer. The user will not see the report on a screen at all.
All the user will have to do is enter the parameters from the screen
and hit a custom button called 'Print' and the invoice will be
printed.
Also, another portion of the question: can I dynamically tell
reporting services which printer to spit the report to, and if so,
how?
All suggestions are appreciated.
Thanks,

There is a good extension sample called the Printer Delivery Sample. You can go through the code and the way to configure it, which is in the BOL help.

You can use the same approach through URL access as well, so that the output goes directly to the printer.
Amarnath
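For reference, the URL-access route would look roughly like this; the server name, report path, and the Date/AccountNumber parameters are placeholders, not from the original post:

```
http://myserver/ReportServer?/Invoices/Invoice
    &rs:Command=Render
    &rs:Format=PDF
    &Date=2012-03-07
    &AccountNumber=12345
```

Note that `rs:Format` only picks the rendered output format; actually pushing the result to a client printer still needs code on the WinForms side (or the Printer Delivery sample mentioned above).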
"Jr. BTS dev!" wrote:
> [snip]

Sunday, February 19, 2012

Integrating SQL RS 2005 into our apps

Microsoft Support

here is our scenario

We are an ISV that develops software products for our customers. We have two different flavors of customer (Enterprise and Standard). Ya, just like Microsoft.

We would like to take SQL RS 2005 and integrate it within our apps.

For our Standard customers, we were thinking of packaging our canned reports with SQL Server 2005 Express with Advanced Services.

For our Enterprise customers, we were thinking of using SQL 2005 Standard or Enterprise Reporting Services.

As I was going down this path and making recommendations, I hit an error trying to make this architecture work for us:

Have the SQL 2005 Report Server (Express with Advanced Services) talk to a SQL Express catalog database on another box.

It appears that SQL 2005 Express can only work with a local catalog database and local data sources. This is turning out to be a dampener on our migration to this server-based reporting product for our low-end customers.

Can you guys think of a workaround? Our customer databases are going to be on SQL 2000, and it is going to be that way for a long time to come. Meanwhile, we also want to adopt SQL 2005 Reporting Services and move from local reporting to server-based reporting.

Thoughts or ideas on helping me sell SQL 2005 RS to the folks here?

Hi,

I am not from MS Support, but I think I am also allowed to answer? :-)

SQL Express with Advanced Services will only be allowed to access local data storage - that is correct. You can verify this in the feature comparison at www.microsoft.com/sql.

Why do you want SQL Express for your Standard customers while connecting to another box?

Will this be the central data storage for your application? Why couldn't this box be the Reporting Server as well?

As another alternative, you could use the ReportViewer control in your application.

Cheers,
Markus