Wednesday, March 21, 2012

Loop Through Flat Files Based On A Date Range

Hello,

I currently have a For Each File container that loops through all files from a specific directory. The files have a naming convention that looks like this:

CDNSC.CDNSC.SC00015.01012007

The last segment of the file name is the date of the data in the file (mmddyyyy). The create date for these files is always a day later than indicated in the file name.

What I would like to do is to have more control over the 'range' of files that are looped through by using the date portion of the file name to define what group of files should be looped through. Ideally, I would like to have a 'StartDate' variable and an 'EndDate' variable that I could define at run time for the package, and the package would loop through all of the files where the date portion of the file name fell between 'StartDate' and 'EndDate'.
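(For concreteness, the requested filter boils down to the following check. This is a minimal C# sketch with made-up paths and dates, not actual package code; the hard-coded dates stand in for the 'StartDate' and 'EndDate' variables.)

    using System;
    using System.Globalization;
    using System.IO;

    class FileDateRangeSketch
    {
        // True when the mmddyyyy segment at the end of the file name
        // falls between startDate and endDate, inclusive.
        static bool InRange(string fileName, DateTime startDate, DateTime endDate)
        {
            string[] parts = fileName.Split('.');
            string datePart = parts[parts.Length - 1];   // e.g. "01012007"
            DateTime fileDate = DateTime.ParseExact(
                datePart, "MMddyyyy", CultureInfo.InvariantCulture);
            return fileDate >= startDate && fileDate <= endDate;
        }

        static void Main()
        {
            DateTime start = new DateTime(2007, 1, 1);   // hypothetical StartDate
            DateTime end = new DateTime(2007, 1, 31);    // hypothetical EndDate

            foreach (string path in Directory.GetFiles(@"C:\SourceFiles"))
            {
                if (InRange(Path.GetFileName(path), start, end))
                    Console.WriteLine("Would process: " + path);
            }
        }
    }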

Any ideas on this?

Thank you for your help!

cdun2

One way is to move the files to an archive directory when you're done with them. That way you only pick up the files you need to process. Just throwing that idea out there.

Thanks for your response. One thing that I have to be prepared for is that I may have to 'reload' data. The date range for any reloading could be anything.

Well, you could (and perhaps should?) stage the data from the flat files in a staging table. Then, once in the staging table you can keep n number of loads around, all marked by either the load date, or a unique load number. After n number of loads, you can delete the old data. Each row in the staging table would also contain the source file name.

Or, you could simply load a table with the load date and a filename. That way, you've got a table that tells you which files to grab for a given load.
Load  File
1     xyz.txt
1     fhs.txt
2     jfb.txt

The above still assumes that you "archive" the files into a subdirectory or somewhere.

The data from the flat files will wind up in one or more SQL Server tables. One idea I was considering was to store the file names in a table, and maybe parse the date portion of the file name into a column. Then somehow use that as a lookup table to determine which files would be processed.
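(A rough sketch of that lookup-table idea, assuming a hypothetical dbo.FileLoad table with LoadNumber, FileName, and FileDate columns and a made-up connection string; the date column would hold the value parsed from the file name's mmddyyyy segment. A reload for any date range then becomes a query against FileDate rather than a directory scan.)

    using System;
    using System.Data.SqlClient;

    class FileLoadTracker
    {
        // Records one source file against a load number so a later
        // reload can be driven by a date-range query instead of a
        // directory scan. Table and column names are hypothetical.
        static void RecordFile(string connectionString, int loadNumber,
                               string fileName, DateTime fileDate)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO dbo.FileLoad (LoadNumber, FileName, FileDate) " +
                "VALUES (@load, @name, @date)", conn))
            {
                cmd.Parameters.AddWithValue("@load", loadNumber);
                cmd.Parameters.AddWithValue("@name", fileName);
                cmd.Parameters.AddWithValue("@date", fileDate);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        static void Main()
        {
            // Hypothetical values for illustration only.
            RecordFile("Server=.;Database=Staging;Integrated Security=true;",
                       1, "CDNSC.CDNSC.SC00015.01012007",
                       new DateTime(2007, 1, 1));
        }
    }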

cdun2 wrote:

The data from the flat files will wind up in one or more SQL Server tables. One idea I was considering was to store the file names in a table, and maybe parse the date portion of the file name into a column. Then somehow use that as a lookup table to determine which files would be processed.

Right, but the point is that your first data flow simply loads the data from the files AS IS into a table. Then in the data flow that you use today, you point them to the staging table instead of the flat file(s).

Oh, I see. The data that I need to process is a subset, and will be aggregated for reporting. The concern that I have is that somewhere down the road, the reporting requirements may change, and the users may need another column from the data. I'm thinking that if that happens, I'll need to add the additional column to the subset table, and reload everything.

cdun2 wrote:

Oh, I see. The data that I need to process is a subset, and will be aggregated for reporting. The concern that I have is that somewhere down the road, the reporting requirements may change, and the users may need another column from the data. I'm thinking that if that happens, I'll need to add the additional column to the subset table, and reload everything.

One argument would be that you're going to have to add the column in the flat file source anyway.

Regardless, this is just one way to tackle this problem. There are other ways, I'm sure.

Actually, the destination table will have just a subset of the columns that are found in the flat files. If a need is found for an additional column of data, it will already be in the flat files. I'll just need to add it to the destination table.

What kind of component could I use in a Data Flow Task to read filenames?

Thanks again.


cdun2 wrote:

Actually, the destination table will have just a subset of the columns that are found in the flat files. If a need is found for an additional column of data, it will already be in the flat files. I'll just need to add it to the destination table.

What kind of component could I use in a Data Flow Task to read filenames?

Thanks again.

If this is the case, then the staging table example still works. In your CONTROL flow, you'd use a foreach loop to spin through the files. Each filename can be put in a variable, which can be used in a data flow inside the foreach loop. Then, using a derived column in the data flow, you'd have access to the variable that contains the current filename.
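(In the simplest case the derived column's expression is just the variable itself, @[User::FileName]. If anything more than a straight copy is needed, a Script Component transformation can stamp the value on each row instead. A sketch, assuming SSIS 2008 or later with C# scripting, a string variable User::FileName listed under ReadOnlyVariables, and an output column named SourceFileName; the Input0Buffer and Variables classes are generated by the designer from that configuration.)

    // Method body inside the generated ScriptMain class of a
    // Script Component transformation: copies the current file name
    // from the package variable onto every row in the data flow.
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        Row.SourceFileName = this.Variables.FileName;
    }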

cdun2 wrote:


Hello,

I currently have a For Each File container that loops through all files from a specific directory. The files have a naming convention that looks like this:

CDNSC.CDNSC.SC00015.01012007

The last segment of the file name is the date of the data in the file (mmddyyyy). The create date for these files is always a day later than indicated in the file name.

What I would like to do is to have more control over the 'range' of files that are looped through by using the date portion of the file name to define what group of files should be looped through. Ideally, I would like to have a 'StartDate' variable and an 'EndDate' variable that I could define at run time for the package, and the package would loop through all of the files where the date portion of the file name fell between 'StartDate' and 'EndDate'.

Any ideas on this?

Thank you for your help!

cdun2

Cdun2,

This is an alternative approach to the issue of processing only the files that are within a start/end date range. It uses expressions and precedence constraints:

http://rafael-salas.blogspot.com/2007/02/ssis-loop-through-files-in-date-range.html
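(The linked post drives the gating purely with SSIS expressions on precedence constraints. If that expression becomes hard to read, the same check can be done in a Script Task inside the ForEach loop instead; this is a sketch of that alternative, not the linked post's code, and it assumes SSIS 2008 or later plus variables named User::FileName, User::StartDate, User::EndDate (read-only) and User::ProcessFile (boolean, read/write).)

    // Script Task Main(): parse the mmddyyyy segment from the current
    // file name and flag whether the file falls inside the range.
    // A precedence constraint whose expression is @[User::ProcessFile]
    // then decides whether the data flow runs for this iteration.
    public void Main()
    {
        string fileName = System.IO.Path.GetFileName(
            Dts.Variables["User::FileName"].Value.ToString());
        string[] parts = fileName.Split('.');
        DateTime fileDate = DateTime.ParseExact(
            parts[parts.Length - 1], "MMddyyyy",
            System.Globalization.CultureInfo.InvariantCulture);

        DateTime startDate = (DateTime)Dts.Variables["User::StartDate"].Value;
        DateTime endDate = (DateTime)Dts.Variables["User::EndDate"].Value;

        Dts.Variables["User::ProcessFile"].Value =
            fileDate >= startDate && fileDate <= endDate;
        Dts.TaskResult = (int)ScriptResults.Success;
    }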

BTW,

Just to save extra work down the road, I would recommend importing all of the columns of the files from the beginning.

Thanks again for your input. I'll dig a little deeper into this next week. As far as bringing in all of the columns of data, the attempt is to aggregate so that we can meet a reporting need, and reduce the number of rows of data that we need to store. It's a resource/performance issue for us. I can't bring in any more than I need.

cdun2 wrote:

I can't bring in any more than I need.

But you can. To Rafael's point, you can at least have ALL of the flat file columns mapped in the connection manager. You don't have to do anything with them in the data flow, but at least they are there if you need to pick them up later; you won't have to redefine the connection manager.

Phil Brammer wrote:

cdun2 wrote:

I can't bring in any more than I need.

But you can. To Rafael's point, you can at least have ALL of the flat file columns mapped in the connection manager. You don't have to do anything with them in the data flow, but at least they are there if you need to pick them up later; you won't have to redefine the connection manager.

Yes, that makes sense. Thanks again.


I took a look at the example here: http://rafael-salas.blogspot.com/2007/02/ssis-loop-through-files-in-date-range.html

Thank you for providing this example. I am having a difficult time making the bridge between the precedence constraint and the Data Flow Task, where the contents of the qualifying flat files are inserted into the SQL Server table.
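(One way to read the linked example: the bridge is the precedence constraint itself. The ForEach loop maps each file name into a variable; a task inside the loop evaluates the date; and the constraint between that task and the Data Flow Task has its evaluation operation set to Expression, so the Data Flow runs only when the expression, for example @[User::ProcessFile] or a date comparison written directly in the expression language, evaluates to true for the current file.)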
