Johnson Earls
2010-07-20 17:41:34 UTC
Hello,
I am hoping that someone on this list can enlighten me about the DTrace i/o provider. I am apparently not understanding where the i/o provider actually sits in the stack.
My understanding, from reading http://wikis.sun.com/display/DTrace/io+Provider, is that (discounting NFS for this purpose) the i/o provider probes fire when I/O is going to a specific disk device - in other words, below the filesystem layer.
I am using both the iopattern DTrace script and my own DTrace script, modified from iopattern, to gather read and write bandwidth statistics on a fibre channel SAN disk device. I do this through the io:genunix::start and io:genunix::done probes, filtering on args[1]->dev_statname for the disk device name and accumulating the bandwidth statistics from args[0]->b_bcount.
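For reference, here is a minimal sketch of the accounting I am doing (the device name "ssd0" is just a placeholder for the actual SAN LUN's dev_statname):

```d
/* Sum I/O bytes per second for one device, split by direction.
 * args[0] is the bufinfo_t; b_bcount is the transfer size in bytes,
 * and the B_READ flag in b_flags distinguishes reads from writes. */
io:genunix::start
/args[1]->dev_statname == "ssd0"/
{
        @bytes[args[0]->b_flags & B_READ ? "read" : "write"] =
            sum(args[0]->b_bcount);
}

tick-1sec
{
        printa("%-8s %@d bytes/sec\n", @bytes);
        trunc(@bytes);
}
```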
However, I am seeing occasional reports of I/O bandwidth anywhere from 40 to 100 GB per second on a 4 Gb/s fibre channel device, so I am obviously not understanding how the io provider works.
My questions:
Do io:genunix::start and io:genunix::done fire *only* for physical device access, or will they fire when the request is being served by a Solaris cache?
If they fire on requests that are served by a cache, is there any way to determine this in order to filter those results out?
If they fire only on physical device access, what can explain the buffer counts being reported at many times higher than what the physical device is capable of?
Thanks for any pointers,
- Johnson
jearls-ieR7/***@public.gmane.org