112 Million records in Ledger Entry Dimension table....

Hi,

Nav W1 4.0 SP1 (4.0 SP2)

One of our subsidiaries that uses the manufacturing module has accumulated 112,000,000 records in the Ledger Entry Dimension table in 2 years of operation. The operation is simple (700 different products, BOMs not exceeding 10 components); we manufacture PVC products (tanks, pallets, bins, etc.), and the biggest production is a water tank at 10,000 units per year. We use a mix of standard cost for overheads and actual cost for production.

We also have some problems with Adjust Cost: we run it monthly and it takes on average 60 hours to complete…

The $100 questions:

  1. Is it normal to have such a huge number of records, and what affects this number?

  2. Is there a relation between the time taken by Adjust Cost and the number of records in Ledger Entry Dimension?

Rgds

Nicolas

Hi, I have experience working with a Navision client that had more than 25,000,000 G/L entries, and as we were using a maximum of 8 dimensions, the Ledger Entry Dimension records were around 90,000,000.

The number of Ledger Entry Dimension records depends on the dimensions used while posting. So if your G/L Entry table has x records and you post with y dimensions, your Ledger Entry Dimension table will have <= x*y records.

So it's pretty normal.
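Just to illustrate the arithmetic: 10,000,000 G/L entries posted with 8 dimensions each can generate up to 80,000,000 Ledger Entry Dimension records on their own, before counting the other ledger entry types that also store their dimensions in table 355.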

And I don't think the Adjust Cost batch job depends on the Ledger Entry Dimension table size. It depends mostly on the Item Ledger Entry, Value Entry, and Item Application Entry tables.

We had more than 36,000 item records and an Item Ledger Entry table of almost 4,000,000 records. We used to run the batch job daily and it hardly ever took more than 30 minutes.

So even if we had run it once a week, it would have taken at most 2.5 hours.

I'll suggest one small thing that we did ourselves to optimize our batch job.

Previously the batch job ran for all 36,000 items and looked for entries in the Item Ledger Entry table for each of them. Instead, we created a flag field in the Item table, and whenever an Item Ledger Entry was posted for an item during the day, that flag was set to TRUE.

At night, when the batch job runs, it only considers the items whose flag is set to TRUE (it's a waste of time iterating over items that have not been used during the day), and the flag is set back to FALSE after the batch job completes.
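Roughly like this (just a C/AL sketch; "Adjust Cost Pending" is an example Boolean field you would add to table 27 Item, not a standard field):

// Where your Item Ledger Entries get inserted (e.g. a customization in
// Codeunit 22 "Item Jnl.-Post Line"), mark the item as touched today:
IF Item.GET(ItemLedgEntry."Item No.") THEN
  IF NOT Item."Adjust Cost Pending" THEN BEGIN
    Item."Adjust Cost Pending" := TRUE;
    Item.MODIFY;
  END;

// In the nightly batch job, only loop over the flagged items:
Item.SETRANGE("Adjust Cost Pending", TRUE);
IF Item.FIND('-') THEN
  REPEAT
    // ... run the cost adjustment for this item here ...
  UNTIL Item.NEXT = 0;

// Reset the flags once the batch job has completed:
Item.MODIFYALL("Adjust Cost Pending", FALSE);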

Hope this is enough of a hint for you to go ahead and optimize it like magic.

thanks and regards

Anurag atish

technical consultant

Bangalore

Hi Anurag,

Thanks for the tip. The table details are below; do you think we might gain that much magic performance?

Table No. | Table Name | No. of Records | Record Size (bytes) | Size (KB) | Optimization (%)
5802 | Value Entry | 6,338,859 | 1,890 | 11,698,936 | 74.7
17 | G/L Entry | 12,805,140 | 1,157 | 14,468,464 | 59.4
355 | Ledger Entry Dimension | 112,516,874 | 178 | 19,598,600 | 50.1
339 | Item Application Entry | 261,005 | 335 | 85,480 | 74.9
32 | Item Ledger Entry | 229,913 | 1,909 | 428,600 | 57.4
27 | Item | 783 | 1,193 | 912 | 60.2

Tks

Nicolas

As a comparison, the numbers below are from a system that’s been in operation for 18 months. Adjust Cost is run nightly and usually takes 30 to 45 minutes. I don’t think table 355 has an impact on this. Likely more of a performance tuning issue.



Table No. | Table Name | No. of Records
5802 | Value Entry | 22,748,234
17 | G/L Entry | 37,450,203
355 | Ledger Entry Dimension | 233,967,710
339 | Item Application Entry | 3,869,378
32 | Item Ledger Entry | 3,602,684
27 | Item | 1,349

Thanks for the details babrown, I'm reassured that there is room for improvement. What is your server config?

If we could do that, then surely you can too.

We had almost the same scenario and table data; only in the Item master we had 36,000 items, whereas you have only 783.

And one more suggestion:

It's regarding general database management. Your DB tables have to be kept well optimized to give the best performance.

Just check that the scan density on all keys is more than 80% on all of the tables above.

Run the following command and check the scan density:

DBCC SHOWCONTIG ('table_name') WITH ALL_INDEXES

(check the SQL syntax online if this doesn't work)

For tables with a scan density below 80%, you should run DBCC DBREINDEX ('table_name') to rebuild the indexes.

BTW, I did not mention that we're on the Native DB. Which DBMS are you working with (Anurag & babrown)? If SQL, does it make a big difference?

Yes, this site is running SQL. I do think the opportunities for performance improvement are better with the SQL version.

The server config would not be relevant since you are not running SQL. I can still provide information if you feel it would be useful.

How big is your database? Maybe you need to think about moving to SQL.

David,

The database size is 100 GB with 58 GB used. That looks pretty reasonable but doesn't explain the time taken by Adjust Cost! The consultants here are trying to load some fixes from 4.0 SP3 that address the problem. Anurag, if you're reading: do you use manufacturing, which version of NAV and which DBMS are you on, and is it possible to have more info on the implementation of that flag? When I suggested that option to the consultants, they had arguments against its feasibility, but without knowing much about the details. Your input is very helpful; sharing knowledge is such a powerful tool!

Thanks

The site mentioned above is a 140 GB database with 100 GB used, and the used space grows by about 7 - 8 GB per month. They are running a 4.0 SP3 database with 5.0 SP1 clients. We are currently involved in upgrading the database to 5.0 SP1 and expect to go live in about 3 weeks.

They run manufacturing, so flagging and running individual items won't work. Besides, 4.0 and later already do this with the "Cost is Adjusted" flag.
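Conceptually, the standard adjustment in those versions already restricts itself to items that still need adjusting, along these lines (sketch only, not the actual codeunit code):

// Only pick up items whose cost has not been adjusted yet
Item.SETRANGE("Cost is Adjusted", FALSE);
IF Item.FIND('-') THEN
  REPEAT
    // ... adjust this item's cost ...
  UNTIL Item.NEXT = 0;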