An FCAL stool test, or fecal calprotectin test, is a diagnostic tool used to measure the level of calprotectin in stool samples. Calprotectin is a protein released by white blood cells during inflammation, making this test useful for detecting inflammatory bowel diseases (IBD) such as Crohn's disease and ulcerative colitis. Elevated levels of calprotectin can indicate intestinal inflammation, helping healthcare providers differentiate between IBD and non-inflammatory conditions such as irritable bowel syndrome (IBS). The test is non-invasive and provides valuable information to guide further diagnostic and treatment decisions.
Calculating the usable capacity of a NetApp filer is quite a complex task. Part of the problem is defining what is meant by "usable". From a user's perspective, you want to store x GB of data and do not want to take things like SnapShots into account. From an administrator's perspective, you need to consider SnapShots and also the maximum utilisation of the storage, because it is a bad idea to run a filer with no free space. To further complicate things, I would really need to know what type of data you want to store on the filer to answer the question fully: the figures vary dramatically depending on whether you are storing file or application data, and whether you are using SnapShots. However, below is how you would calculate the storage space for normal file data.

1. Obtain the formatted disk size for the disks in the filer. This can be done from FilerView, Operations Manager or with the "sysconfig -r" command (see below). In the example below the system has 72GB 10K FC disks, and the output shows a formatted size (the 10th field, "Used") of 68,000 MB (about 66.4GB).

FILER1> sysconfig -r
Spare disks
RAID Disk  Device  HA  SHELF  BAY  CHAN  Pool  Type  RPM    Used (MB/blks)   Phys (MB/blks)
---------  ------  --  -----  ---  ----  ----  ----  -----  ---------------  ---------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare      0a.16   0a  1      0    FC:A  -     FCAL  10000  68000/139264000  69536/142410400 (not zeroed)

2. Work out the number of usable disks you will have. This is the total number of disks less parity and spare disks. For example, if you have a shelf of 14 disks you lose three: one as the parity disk, one as the double-parity disk (I assume you will be using RAID-DP and not RAID4), and one as a hot spare. You may have more than one hot spare, but one is fine for a single shelf. Therefore, from a total of 14 disks, 11 are usable. You may also have more parity disks than the two in this example.
This depends on the size and number of RAID groups; the rule is that you lose two parity disks for each RAID group. Let's keep it simple and say you have just one RAID group. So at the moment we know we will have 730.4GB (66.4 * 11).

3. Next we need to take into account the WAFL reserve and the aggregate SnapShot space. By default WAFL reserves 10% for its own use, and by default new aggregates have a SnapShot reserve of 5% (although you can increase or remove this altogether). So if we take our 730.4GB figure we will have 620.84GB left (730.4 - (730.4 * 0.15)).

4. Although we now have the usable capacity of the aggregate, we have not taken into account the volume SnapShot space. By default this is 20%, so taking our last figure of 620.84GB we will have 496.67GB left (620.84 - (620.84 * 0.2)).

5. So at this point we know we can store just under 0.5TB of data (or just less than half of the marketing size of a shelf of disks). However, it would not be good practice to run a filer at 100% capacity, as you will have performance issues, not to mention administration issues. I would not recommend running a filer at any more than 90% as an absolute maximum; 80% would be better, and considering that there may be some data growth, 70% would be a better maximum utilisation figure. Taking this maximum utilisation figure, you would be left with 347.67GB (496.67 - (496.67 * 0.3)).

Hope this all makes sense. It would also be worth considering things like thin provisioning: quite a large amount of savings can be made by using it. See www.secalc.com, which will show an example of how much storage can be saved. Just make sure you have some sort of alerting software (such as NetApp's Operations Manager) so that you don't suddenly run out of space because you have given away more storage than you actually have.
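The arithmetic in the steps above can be sketched as a short script. The function name and parameters here are my own illustration, and the percentages are just the defaults quoted in this answer (WAFL reserve, aggregate and volume SnapShot reserves, target utilisation), all of which can be tuned on a real system:

```python
# Rough usable-capacity estimate for a NetApp filer, following the
# steps described above. A sketch with this answer's example figures,
# not an official NetApp tool.

def usable_capacity_gb(formatted_disk_gb, total_disks, parity_disks,
                       hot_spares, wafl_reserve=0.10, aggr_snap=0.05,
                       vol_snap=0.20, max_utilisation=0.70):
    data_disks = total_disks - parity_disks - hot_spares
    raw = formatted_disk_gb * data_disks                 # step 2: usable disks
    after_reserves = raw * (1 - wafl_reserve - aggr_snap)  # step 3: WAFL + aggr snap
    after_vol_snap = after_reserves * (1 - vol_snap)     # step 4: volume SnapShot space
    return after_vol_snap * max_utilisation              # step 5: safe utilisation

# Example: one shelf of 14 x 72GB disks (66.4GB formatted),
# RAID-DP (2 parity disks) plus 1 hot spare.
print(round(usable_capacity_gb(66.4, 14, 2, 1), 2))
```

Running it with the example figures reproduces the roughly 347.67GB end result from step 5, and you can swap in your own disk counts and reserve percentages to model a different shelf.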