I am using a Lookup component to do a typical SCD: compare the natural keys, and if a match is found, redirect those rows and process them; if no match is found (the error rows), redirect those and process them separately.
When I use the component for a historical load (meaning there are no rows in the destination table yet) and set the cache mode to Partial Cache, the data flow stalls after about 46,000 rows and never completes. The moment I switch it to Full Cache, it flows fine. But Partial Cache is what I am supposed to be using, keeping the incremental loads in mind. Why does the component stall?
I had used Partial Cache in an earlier project with an 18-million-row table (albeit for an incremental load) and it worked fine; it was slow, but at least it worked. Now I am trying to load just 300,000 rows and it stalls.
I am using a machine with 2 GB of RAM and tried setting the cache memory limit to 750 MB and then 500 MB; neither worked.
I tried two different machines; the same thing happened.
Any insight will be appreciated.
I am just wondering why you said that Partial Cache is required for the historical load. I always use Full Cache (after making sure the server has enough memory available).
How is your Lookup set up? You should be using only the required columns. You said this is for an SCD, so you should be using a query with the business/natural key columns only, along the lines of the sketch below (please provide the number of columns and the data types of the required columns). Also, how are the source and destination components set up? Are you using 'fast load' on the destination?
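A minimal sketch of such a reference query; the table and column names here are placeholders, not taken from your actual package:

    SELECT BusinessKey,  -- the natural/business key used for matching
           RowHash       -- optional change-detection column
    FROM dbo.DimExample; -- the SCD dimension table being looked up

Anything not needed for the match or for change detection should stay out of the query so the cache stays small.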
Hmmm, so you use Full Cache always? If I leave it at Full Cache, then from next time onwards, during incremental loads, it will load all 290,000 rows into memory when possibly fewer than 10 rows have changed. I don't want to do that.
Yes, I bring in ONLY the required columns; actually 4 columns: NaturalKey, InferredMember, Hash1, and Hash2, all integers.
Yes, I use fast load.
I use OLE DB for my source.
JaguarRDA wrote:
If I leave it at Full Cache, during incremental loads it will load all 290,000 rows into memory when possibly fewer than 10 rows have changed. I don't want to do that.
An INT32 field, though (call it "natural key"), will only take up about 1.1 MB of memory to store 290,000 rows (290,000 rows x 4 bytes ≈ 1.1 MB). Not too bad, eh?
That is exactly my point. Even with 4 columns, SSIS should be able to cache the whole result set in no time. I would recommend you watch the Progress tab in BIDS to check how long the Lookup caching takes. Also check Task Manager to see if any other process is taking too many resources on the box. You can try replacing the OLE DB destination with a Row Count transformation just to test the transformation speed, and then remove the Lookup and destination and use the same Row Count to measure the 'reading' speed of the source component.
Make sure you are using fast load in your OLE DB destination.
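For what it's worth, the difference between the two cache modes comes down to how the reference data is queried. Roughly speaking (a sketch; the table name dbo.DimTable is a placeholder, the columns are the ones you listed):

    -- Full Cache: the Lookup runs its reference query once, up front,
    -- and keeps the whole result set in memory for the entire data flow.
    SELECT NaturalKey, InferredMember, Hash1, Hash2
    FROM dbo.DimTable;

    -- Partial Cache: the Lookup issues a parameterized query for each
    -- cache miss. On a historical load every incoming row is a miss,
    -- so this means one round trip to the database per row.
    SELECT NaturalKey, InferredMember, Hash1, Hash2
    FROM dbo.DimTable
    WHERE NaturalKey = ?;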