author:    Scott Rifenbark <scott.m.rifenbark@intel.com>  2013-01-17 20:57:16 (GMT)
committer: Richard Purdie <richard.purdie@linuxfoundation.org>  2013-01-27 13:56:03 (GMT)
commit:    acb86de34e3262cd6233da66bf2fa0b9c8a22171 (patch)
tree:      91bcda12352177ef14ce75064c4fdef3aef941f0 /documentation/profile-manual
parent:    fcf615546f88e28caa56b2e977c183c792e071a6 (diff)
download:  poky-acb86de34e3262cd6233da66bf2fa0b9c8a22171.tar.gz
profile-manual: Added oprofile usage section.
No re-writing at all. (From yocto-docs rev: f42230e3665903a7603212696949241244555f8b) Signed-off-by: Scott Rifenbark <scott.m.rifenbark@intel.com> Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Diffstat (limited to 'documentation/profile-manual')
-rw-r--r--  documentation/profile-manual/profile-manual-usage.xml  552
1 file changed, 552 insertions, 0 deletions
diff --git a/documentation/profile-manual/profile-manual-usage.xml b/documentation/profile-manual/profile-manual-usage.xml
index 39a0c5c..f2bc868 100644
--- a/documentation/profile-manual/profile-manual-usage.xml
+++ b/documentation/profile-manual/profile-manual-usage.xml
@@ -2221,6 +2221,558 @@
2221 </section>
2222</section>
2223
2224<section id='profile-manual-oprofile'>
2225 <title>oprofile</title>
2226
2227 <para>
2228 oprofile itself is a command-line application that runs on the
2229 target system.
2230 </para>
2231
2232 <section id='oprofile-setup'>
2233 <title>Setup</title>
2234
2235 <para>
2236 For this section, we'll assume you've already performed the
2237 basic setup outlined in the
2238 "<link linkend='profile-manual-general-setup'>General Setup</link>"
2239 section.
2240 </para>
2241
2242 <para>
2243 For the section that deals with oprofile from the command line,
2244 we assume you've ssh'ed from the host to the target and will
2245 be running oprofile there.
2246 </para>
2247
2248 <para>
2249 oprofileui (oprofile-viewer) is a GUI-based program that runs
2250 on the host and interacts remotely with the target.
2251 See the oprofileui section for the exact steps needed to
2252 install oprofileui on the host.
2253 </para>
2254 </section>
2255
2256 <section id='oprofile-basic-usage'>
2257 <title>Basic Usage</title>
2258
2259 <para>
2260 OProfile as configured in Yocto is a system-wide profiler
2261 (i.e. the version in Yocto doesn't yet make use of the
2262 perf_events interface, which would allow it to profile
2263 specific processes and workloads). It relies on hardware
2264 performance counter support (but can fall back to a
2265 timer-based mode), which means that it doesn't take
2266 advantage of tracepoints or other event sources, for example.
2267 </para>
2268
2269 <para>
2270 It consists of a kernel module that collects samples and a
2271 userspace daemon that writes the sample data to disk.
2272 </para>
2273
2274 <para>
2275 The 'opcontrol' shell script is used for transparently
2276 managing these components and starting and stopping
2277 profiles, and the 'opreport' command is used to
2278 display the results.
2279 </para>
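The opcontrol/opreport workflow that the rest of this section walks through step by step can be condensed into a short helper script. This is only a sketch: the run() wrapper and the DRYRUN variable are our own conventions, not part of oprofile, added so the sequence can be reviewed on a machine without oprofile installed; set DRYRUN=0 on the target to run the commands for real.

```shell
#!/bin/sh
# Sketch of a complete oprofile session. DRYRUN defaults to 1
# (print each command instead of executing it) so the script is
# safe to inspect on a host without oprofile installed.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run opcontrol --vmlinux=/boot/vmlinux-"$(uname -r)"  # include the kernel in the profile
run opcontrol --start-daemon                         # start the oprofiled daemon
run opcontrol --start                                # begin collecting samples
# ... run the workload you want profiled here ...
run opcontrol --shutdown                             # stop profiling and kill the daemon
run opreport                                         # summarize the collected samples
```

With DRYRUN left at its default of 1, the script simply echoes the five opcontrol/opreport commands in order.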
2280
2281 <para>
2282 The oprofile daemon should already be running, but before
2283 you start profiling, you may need to change some settings,
2284 and some of those changes require that the daemon not
2285 be running. One such setting is the path to the
2286 vmlinux file, which you'll want to set using the --vmlinux
2287 option if you want the kernel profiled:
2288 <literallayout class='monospaced'>
2289 root@crownbay:~# opcontrol --vmlinux=/boot/vmlinux-`uname -r`
2290 The profiling daemon is currently active, so changes to the configuration
2291 will be used the next time you restart oprofile after a --shutdown or --deinit.
2292 </literallayout>
2293 You can check whether the 'vmlinux file:' setting is in effect using opcontrol --status:
2294 <literallayout class='monospaced'>
2295 root@crownbay:~# opcontrol --status
2296 Daemon paused: pid 1334
2297 Separate options: library
2298 vmlinux file: none
2299 Image filter: none
2300 Call-graph depth: 6
2301 </literallayout>
2302 If it isn't, you need to shut down the daemon, add the
2303 setting, and restart the daemon:
2304 <literallayout class='monospaced'>
2305 root@crownbay:~# opcontrol --shutdown
2306 Killing daemon.
2307
2308 root@crownbay:~# opcontrol --vmlinux=/boot/vmlinux-`uname -r`
2309 root@crownbay:~# opcontrol --start-daemon
2310 Using default event: CPU_CLK_UNHALTED:100000:0:1:1
2311 Using 2.6+ OProfile kernel interface.
2312 Reading module info.
2313 Using log file /var/lib/oprofile/samples/oprofiled.log
2314 Daemon started.
2315 </literallayout>
2316 If we get the status again, we now see our updated settings:
2317 <literallayout class='monospaced'>
2318 root@crownbay:~# opcontrol --status
2319 Daemon paused: pid 1649
2320 Separate options: library
2321 vmlinux file: /boot/vmlinux-3.4.11-yocto-standard
2322 Image filter: none
2323 Call-graph depth: 6
2324 </literallayout>
2325 We're now in a position to run a profile. For that we use
2326 'opcontrol --start':
2327 <literallayout class='monospaced'>
2328 root@crownbay:~# opcontrol --start
2329 Profiler running.
2330 </literallayout>
2331 In another window, run our wget workload:
2332 <literallayout class='monospaced'>
2333 root@crownbay:~# rm linux-2.6.19.2.tar.bz2; wget <ulink url='http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2'>http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2</ulink>; sync
2334 Connecting to downloads.yoctoproject.org (140.211.169.59:80)
2335 linux-2.6.19.2.tar.b 100% |*******************************| 41727k 0:00:00 ETA
2336 </literallayout>
2337 To stop the profile we use 'opcontrol --shutdown', which not
2338 only stops the profile but shuts down the daemon as well:
2339 <literallayout class='monospaced'>
2340 root@crownbay:~# opcontrol --shutdown
2341 Stopping profiling.
2342 Killing daemon.
2343 </literallayout>
2344 Oprofile writes sample data to /var/lib/oprofile/samples,
2345 which you can look at if you're interested in seeing how the
2346 samples are structured. The layout is also worth knowing
2347 because it's related to how you drill down for further
2348 details about specific executables in OProfile.
2349 </para>
2350
2351 <para>
2352 To see the default display output for a profile, simply type
2353 'opreport', which will show the results using the data in
2354 /var/lib/oprofile/samples:
2355 <literallayout class='monospaced'>
2356 root@crownbay:~# opreport
2357
2358 WARNING! The OProfile kernel driver reports sample buffer overflows.
2359 Such overflows can result in incorrect sample attribution, invalid sample
2360 files and other symptoms. See the oprofiled.log for details.
2361 You should adjust your sampling frequency to eliminate (or at least minimize)
2362 these overflows.
2363 CPU: Intel Architectural Perfmon, speed 1.3e+06 MHz (estimated)
2364 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
2365 CPU_CLK_UNHALT...|
2366 samples| %|
2367 ------------------
2368 464365 79.8156 vmlinux-3.4.11-yocto-standard
2369 65108 11.1908 oprofiled
2370 CPU_CLK_UNHALT...|
2371 samples| %|
2372 ------------------
2373 64416 98.9372 oprofiled
2374 692 1.0628 libc-2.16.so
2375 36959 6.3526 no-vmlinux
2376 4378 0.7525 busybox
2377 CPU_CLK_UNHALT...|
2378 samples| %|
2379 ------------------
2380 2844 64.9612 libc-2.16.so
2381 1337 30.5391 busybox
2382 193 4.4084 ld-2.16.so
2383 2 0.0457 libnss_compat-2.16.so
2384 1 0.0228 libnsl-2.16.so
2385 1 0.0228 libnss_files-2.16.so
2386 4344 0.7467 bash
2387 CPU_CLK_UNHALT...|
2388 samples| %|
2389 ------------------
2390 2657 61.1648 bash
2391 1665 38.3287 libc-2.16.so
2392 18 0.4144 ld-2.16.so
2393 3 0.0691 libtinfo.so.5.9
2394 1 0.0230 libdl-2.16.so
2395 3118 0.5359 nf_conntrack
2396 686 0.1179 matchbox-terminal
2397 CPU_CLK_UNHALT...|
2398 samples| %|
2399 ------------------
2400 214 31.1953 libglib-2.0.so.0.3200.4
2401 114 16.6181 libc-2.16.so
2402 79 11.5160 libcairo.so.2.11200.2
2403 78 11.3703 libgdk-x11-2.0.so.0.2400.8
2404 51 7.4344 libpthread-2.16.so
2405 45 6.5598 libgobject-2.0.so.0.3200.4
2406 29 4.2274 libvte.so.9.2800.2
2407 25 3.6443 libX11.so.6.3.0
2408 19 2.7697 libxcb.so.1.1.0
2409 17 2.4781 libgtk-x11-2.0.so.0.2400.8
2410 12 1.7493 librt-2.16.so
2411 3 0.4373 libXrender.so.1.3.0
2412 671 0.1153 emgd
2413 411 0.0706 nf_conntrack_ipv4
2414 391 0.0672 iptable_nat
2415 378 0.0650 nf_nat
2416 263 0.0452 Xorg
2417 CPU_CLK_UNHALT...|
2418 samples| %|
2419 ------------------
2420 106 40.3042 Xorg
2421 53 20.1521 libc-2.16.so
2422 31 11.7871 libpixman-1.so.0.27.2
2423 26 9.8859 emgd_drv.so
2424 16 6.0837 libemgdsrv_um.so.1.5.15.3226
2425 11 4.1825 libEMGD2d.so.1.5.15.3226
2426 9 3.4221 libfb.so
2427 7 2.6616 libpthread-2.16.so
2428 1 0.3802 libudev.so.0.9.3
2429 1 0.3802 libdrm.so.2.4.0
2430 1 0.3802 libextmod.so
2431 1 0.3802 mouse_drv.so
2432 .
2433 .
2434 .
2435 9 0.0015 connmand
2436 CPU_CLK_UNHALT...|
2437 samples| %|
2438 ------------------
2439 4 44.4444 libglib-2.0.so.0.3200.4
2440 2 22.2222 libpthread-2.16.so
2441 1 11.1111 connmand
2442 1 11.1111 libc-2.16.so
2443 1 11.1111 librt-2.16.so
2444 6 0.0010 oprofile-server
2445 CPU_CLK_UNHALT...|
2446 samples| %|
2447 ------------------
2448 3 50.0000 libc-2.16.so
2449 1 16.6667 oprofile-server
2450 1 16.6667 libpthread-2.16.so
2451 1 16.6667 libglib-2.0.so.0.3200.4
2452 5 8.6e-04 gconfd-2
2453 CPU_CLK_UNHALT...|
2454 samples| %|
2455 ------------------
2456 2 40.0000 libdbus-1.so.3.7.2
2457 2 40.0000 libglib-2.0.so.0.3200.4
2458 1 20.0000 libc-2.16.so
2459 </literallayout>
2460 The output above shows the breakdown of samples, by both
2461 sample count and percentage, for each executable.
2462 Within an executable, the counts are broken down further
2463 between the executable itself and the shared
2464 libraries (DSOs) it uses.
2465 </para>
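If you want to post-process these breakdowns rather than eyeball them, the top-level per-image lines are easy to pull apart with standard shell tools. The snippet below is a sketch of our own, not part of oprofile: it operates on a saved excerpt of the per-image lines from the report above (on a real target you'd capture the output of 'opreport' to a file instead) and totals the sample counts.

```shell
# Sketch: total up per-image sample counts from saved 'opreport'
# output. The here-doc is an excerpt of the top-level lines from
# the report above; on a real target you'd redirect 'opreport'
# output to this file instead.
cat > /tmp/opreport-excerpt.txt <<'EOF'
464365 79.8156 vmlinux-3.4.11-yocto-standard
 65108 11.1908 oprofiled
 36959  6.3526 no-vmlinux
  4378  0.7525 busybox
EOF

# Print each image's sample count, then the overall total.
awk '{ total += $1; printf "%-32s %8d\n", $3, $1 }
     END { printf "%-32s %8d\n", "TOTAL", total }' /tmp/opreport-excerpt.txt
```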
2466
2467 <para>
2468 To get even more detailed breakdowns by function, we need to
2469 have the full paths to the DSOs, which we can get by
2470 using -f with opreport:
2471 <literallayout class='monospaced'>
2472 root@crownbay:~# opreport -f
2473
2474 CPU: Intel Architectural Perfmon, speed 1.3e+06 MHz (estimated)
2475 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
2476 CPU_CLK_UNHALT...|
2477 samples| %|
2478 ------------------
2479 464365 79.8156 /boot/vmlinux-3.4.11-yocto-standard
2480 65108 11.1908 /usr/bin/oprofiled
2481 CPU_CLK_UNHALT...|
2482 samples| %|
2483 ------------------
2484 64416 98.9372 /usr/bin/oprofiled
2485 692 1.0628 /lib/libc-2.16.so
2486 36959 6.3526 /no-vmlinux
2487 4378 0.7525 /bin/busybox
2488 CPU_CLK_UNHALT...|
2489 samples| %|
2490 ------------------
2491 2844 64.9612 /lib/libc-2.16.so
2492 1337 30.5391 /bin/busybox
2493 193 4.4084 /lib/ld-2.16.so
2494 2 0.0457 /lib/libnss_compat-2.16.so
2495 1 0.0228 /lib/libnsl-2.16.so
2496 1 0.0228 /lib/libnss_files-2.16.so
2497 4344 0.7467 /bin/bash
2498 CPU_CLK_UNHALT...|
2499 samples| %|
2500 ------------------
2501 2657 61.1648 /bin/bash
2502 1665 38.3287 /lib/libc-2.16.so
2503 18 0.4144 /lib/ld-2.16.so
2504 3 0.0691 /lib/libtinfo.so.5.9
2505 1 0.0230 /lib/libdl-2.16.so
2506 .
2507 .
2508 .
2509 </literallayout>
2510 Using the paths shown in the above output and the -l option to
2511 opreport, we can see all the functions that have hits in the
2512 profile and their sample counts and percentages. Here's a
2513 portion of what we get for the kernel:
2514 <literallayout class='monospaced'>
2515 root@crownbay:~# opreport -l /boot/vmlinux-3.4.11-yocto-standard
2516
2517 CPU: Intel Architectural Perfmon, speed 1.3e+06 MHz (estimated)
2518 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
2519 samples % symbol name
2520 233981 50.3873 intel_idle
2521 15437 3.3243 rb_get_reader_page
2522 14503 3.1232 ring_buffer_consume
2523 14092 3.0347 mutex_spin_on_owner
2524 13024 2.8047 read_hpet
2525 8039 1.7312 sub_preempt_count
2526 7096 1.5281 ioread32
2527 6997 1.5068 add_preempt_count
2528 3985 0.8582 rb_advance_reader
2529 3488 0.7511 add_event_entry
2530 3303 0.7113 get_parent_ip
2531 3104 0.6684 rb_buffer_peek
2532 2960 0.6374 op_cpu_buffer_read_entry
2533 2614 0.5629 sync_buffer
2534 2545 0.5481 debug_smp_processor_id
2535 2456 0.5289 ohci_irq
2536 2397 0.5162 memset
2537 2349 0.5059 __copy_to_user_ll
2538 2185 0.4705 ring_buffer_event_length
2539 1918 0.4130 in_lock_functions
2540 1850 0.3984 __schedule
2541 1767 0.3805 __copy_from_user_ll_nozero
2542 1575 0.3392 rb_event_data_length
2543 1256 0.2705 memcpy
2544 1233 0.2655 system_call
2545 1213 0.2612 menu_select
2546 </literallayout>
2547 Notice that above we see an entry for the __copy_to_user_ll()
2548 function that we've looked at with other profilers as well.
2549 </para>
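The same kind of shell post-processing works on the per-symbol output. As a sketch (again our own addition, operating on an inline excerpt of the 'opreport -l' output above), here's one way to pull a single symbol's sample count out of a saved report:

```shell
# Sketch: look up one symbol in saved 'opreport -l' output.
# The here-doc is an excerpt of the kernel report above; on the
# target you'd save the real output of
# 'opreport -l /boot/vmlinux-...' to this file instead.
cat > /tmp/opreport-l-excerpt.txt <<'EOF'
233981 50.3873 intel_idle
 15437  3.3243 rb_get_reader_page
  2349  0.5059 __copy_to_user_ll
  1256  0.2705 memcpy
EOF

sym=__copy_to_user_ll
awk -v s="$sym" '$3 == s { print $1 " samples (" $2 "%) for " s }' \
    /tmp/opreport-l-excerpt.txt
```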
2550
2551 <para>
2552 Here's what we get when we do the same thing for the
2553 busybox executable:
2554 <literallayout class='monospaced'>
2555 CPU: Intel Architectural Perfmon, speed 1.3e+06 MHz (estimated)
2556 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000
2557 samples % image name symbol name
2558 349 8.4198 busybox retrieve_file_data
2559 308 7.4306 libc-2.16.so _IO_file_xsgetn
2560 283 6.8275 libc-2.16.so __read_nocancel
2561 235 5.6695 libc-2.16.so syscall
2562 233 5.6212 libc-2.16.so clearerr
2563 215 5.1870 libc-2.16.so fread
2564 181 4.3667 libc-2.16.so __write_nocancel
2565 158 3.8118 libc-2.16.so __underflow
2566 151 3.6429 libc-2.16.so _dl_addr
2567 150 3.6188 busybox progress_meter
2568 150 3.6188 libc-2.16.so __poll_nocancel
2569 148 3.5706 libc-2.16.so _IO_file_underflow@@GLIBC_2.1
2570 137 3.3052 busybox safe_poll
2571 125 3.0157 busybox bb_progress_update
2572 122 2.9433 libc-2.16.so __x86.get_pc_thunk.bx
2573 95 2.2919 busybox full_write
2574 81 1.9542 busybox safe_write
2575 77 1.8577 busybox xwrite
2576 72 1.7370 libc-2.16.so _IO_file_read
2577 71 1.7129 libc-2.16.so _IO_sgetn
2578 67 1.6164 libc-2.16.so poll
2579 52 1.2545 libc-2.16.so _IO_switch_to_get_mode
2580 45 1.0856 libc-2.16.so read
2581 34 0.8203 libc-2.16.so write
2582 32 0.7720 busybox monotonic_sec
2583 25 0.6031 libc-2.16.so vfprintf
2584 22 0.5308 busybox get_mono
2585 14 0.3378 ld-2.16.so strcmp
2586 14 0.3378 libc-2.16.so __x86.get_pc_thunk.cx
2587 .
2588 .
2589 .
2590 </literallayout>
2591 Since we recorded the profile with a callchain depth of 6, we
2592 should be able to see our __copy_to_user_ll() callchains in
2593 the output, and indeed we can if we search around a bit in
2594 the 'opreport --callgraph' output:
2595 <literallayout class='monospaced'>
2596 root@crownbay:~# opreport --callgraph /boot/vmlinux-3.4.11-yocto-standard
2597
2598 392 6.9639 vmlinux-3.4.11-yocto-standard sock_aio_read
2599 736 13.0751 vmlinux-3.4.11-yocto-standard __generic_file_aio_write
2600 3255 57.8255 vmlinux-3.4.11-yocto-standard inet_recvmsg
2601 785 0.1690 vmlinux-3.4.11-yocto-standard tcp_recvmsg
2602 1790 31.7940 vmlinux-3.4.11-yocto-standard local_bh_enable
2603 1238 21.9893 vmlinux-3.4.11-yocto-standard __kfree_skb
2604 992 17.6199 vmlinux-3.4.11-yocto-standard lock_sock_nested
2605 785 13.9432 vmlinux-3.4.11-yocto-standard tcp_recvmsg [self]
2606 525 9.3250 vmlinux-3.4.11-yocto-standard release_sock
2607 112 1.9893 vmlinux-3.4.11-yocto-standard tcp_cleanup_rbuf
2608 72 1.2789 vmlinux-3.4.11-yocto-standard skb_copy_datagram_iovec
2609
2610 170 0.0366 vmlinux-3.4.11-yocto-standard skb_copy_datagram_iovec
2611 1491 73.3038 vmlinux-3.4.11-yocto-standard memcpy_toiovec
2612 327 16.0767 vmlinux-3.4.11-yocto-standard skb_copy_datagram_iovec
2613 170 8.3579 vmlinux-3.4.11-yocto-standard skb_copy_datagram_iovec [self]
2614 20 0.9833 vmlinux-3.4.11-yocto-standard copy_to_user
2615
2616 2588 98.2909 vmlinux-3.4.11-yocto-standard copy_to_user
2617 2349 0.5059 vmlinux-3.4.11-yocto-standard __copy_to_user_ll
2618 2349 89.2138 vmlinux-3.4.11-yocto-standard __copy_to_user_ll [self]
2619 166 6.3046 vmlinux-3.4.11-yocto-standard do_page_fault
2620 </literallayout>
2621 Remember that by default OProfile sessions are cumulative,
2622 i.e. if you start and stop a profiling session, then start a
2623 new one, the new one will not erase the previous run(s) but
2624 will build on them. If you want to restart a profile from
2625 scratch, you need to reset:
2626 <literallayout class='monospaced'>
2627 root@crownbay:~# opcontrol --reset
2628 </literallayout>
2629 </para>
2630 </section>
2631
2632 <section id='oprofileui-a-gui-for-oprofile'>
2633 <title>OProfileUI - A GUI for OProfile</title>
2634
2635 <para>
2636 Yocto also supports a graphical UI for controlling and viewing
2637 OProfile traces, called OProfileUI. To use it, you first need
2638 to clone the oprofileui git repo, then configure, build, and
2639 install it:
2640 <literallayout class='monospaced'>
2641 [trz@empanada tmp]$ git clone git://git.yoctoproject.org/oprofileui
2642 [trz@empanada tmp]$ cd oprofileui
2643 [trz@empanada oprofileui]$ ./autogen.sh
2644 [trz@empanada oprofileui]$ sudo make install
2645 </literallayout>
2646 OProfileUI replaces the 'opreport' functionality with a GUI,
2647 and normally doesn't require the user to use 'opcontrol' directly.
2648 If you want to profile the kernel, however, you need to either
2649 use the UI to specify a vmlinux or use 'opcontrol' to specify
2650 it on the target:
2651 </para>
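After 'make install' completes, it's worth sanity-checking that the viewer actually landed on your PATH before moving on. A minimal sketch (the have() helper is our own convention, not part of the OProfileUI instructions):

```shell
# Sketch: check that a binary from the build step is actually
# visible on PATH; if it reports "no", revisit 'make install'.
have() { command -v "$1" >/dev/null 2>&1 && echo "yes" || echo "no"; }

echo "oprofile-viewer installed: $(have oprofile-viewer)"
```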
2652
2653 <para>
2654 First, on the target, check whether the 'vmlinux file:' setting is in effect:
2655 <literallayout class='monospaced'>
2656 root@crownbay:~# opcontrol --status
2657 </literallayout>
2658 If not:
2659 <literallayout class='monospaced'>
2660 root@crownbay:~# opcontrol --shutdown
2661 root@crownbay:~# opcontrol --vmlinux=/boot/vmlinux-`uname -r`
2662 root@crownbay:~# opcontrol --start-daemon
2663 </literallayout>
2664 Now, start the oprofile UI on the host system:
2665 <literallayout class='monospaced'>
2666 [trz@empanada oprofileui]$ oprofile-viewer
2667 </literallayout>
2668 To run a profile on the remote system, first connect to the
2669 remote system by pressing the 'Connect' button and supplying
2670 the IP address and port of the remote system (the default
2671 port is 4224).
2672 </para>
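Before pressing 'Connect', you can sanity-check from the host that something is actually listening on the server port. This is a sketch using netcat; TARGET_IP is a placeholder for your target's address (not taken from this document), and 4224 is the oprofile-server default mentioned above.

```shell
# Sketch: probe the oprofile-server port from the host before
# connecting the viewer. TARGET_IP is a placeholder; replace it
# with your target's actual address.
TARGET_IP=${TARGET_IP:-192.168.7.2}
PORT=${PORT:-4224}                  # oprofile-server default port
if nc -z -w 2 "$TARGET_IP" "$PORT" 2>/dev/null; then
    echo "server reachable on $TARGET_IP:$PORT"
else
    echo "nothing listening on $TARGET_IP:$PORT - start oprofile-server on the target"
fi
```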
2673
2674 <para>
2675 The oprofile server should already have been started
2676 automatically. If the connection fails, you either typed in
2677 the wrong IP address and port, or you need to start
2678 the server yourself:
2679 <literallayout class='monospaced'>
2680 root@crownbay:~# oprofile-server
2681 </literallayout>
2682 Or, to specify a specific port:
2683 <literallayout class='monospaced'>
2684 root@crownbay:~# oprofile-server --port 8888
2685 </literallayout>
2686 Once connected, press the 'Start' button and then run the
2687 wget workload on the remote system:
2688 <literallayout class='monospaced'>
2689 root@crownbay:~# rm linux-2.6.19.2.tar.bz2; wget <ulink url='http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2'>http://downloads.yoctoproject.org/mirror/sources/linux-2.6.19.2.tar.bz2</ulink>; sync
2690 Connecting to downloads.yoctoproject.org (140.211.169.59:80)
2691 linux-2.6.19.2.tar.b 100% |*******************************| 41727k 0:00:00 ETA
2692 </literallayout>
2693 Once the workload completes, press the 'Stop' button. At that
2694 point the OProfile viewer will download the profile files it's
2695 collected (this may take some time, especially if the kernel
2696 was profiled). While it downloads the files, you should see
2697 something like the following:
2698 </para>
2699
2700 <para>
2701 <imagedata fileref="figures/oprofileui-downloading.png" width="6in" depth="7in" align="center" scalefit="1" />
2702 </para>
2703
2704 <para>
2705 Once the profile files have been retrieved, you should see a
2706 list of the processes that were profiled:
2707 </para>
2708
2709 <para>
2710 <imagedata fileref="figures/oprofileui-processes.png" width="6in" depth="7in" align="center" scalefit="1" />
2711 </para>
2712
2713 <para>
2714 If you select one of them, you should see all the symbols that
2715 were hit during the profile. Selecting one of them will show a
2716 list of callers and callees of the chosen function in two
2717 panes below the top pane. For example, here's what we see
2718 when we select __copy_to_user_ll():
2719 </para>
2720
2721 <para>
2722 <imagedata fileref="figures/oprofileui-copy-to-user.png" width="6in" depth="7in" align="center" scalefit="1" />
2723 </para>
2724
2725 <para>
2726 As another example, we can look at the busybox process and see
2727 that the progress meter made a system call:
2728 </para>
2729
2730 <para>
2731 <imagedata fileref="figures/oprofileui-busybox.png" width="6in" depth="7in" align="center" scalefit="1" />
2732 </para>
2733
2734 <note>
2735 Tying It Together: oprofile does have build options to enable
2736 use of the perf_events subsystem and benefit from the
2737 perf_events infrastructure by adding support for something
2738 other than system-wide profiling, i.e. per-process or
2739 per-workload profiling, but the version in danny doesn't
2740 yet take advantage of those capabilities.
2741 </note>
2742 </section>
2743
2744 <section id='oprofile-documentation'>
2745 <title>Documentation</title>
2746
2747 <para>
2748 Yocto already has some information on setting up and using
2749 OProfile and oprofileui. As this document doesn't cover
2750 everything in detail, it may be worth taking a look at the
2751 "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-oprofile'>Profiling with OProfile</ulink>"
2752 section in the Yocto Project Development Manual.
2753 </para>
2754
2755 <para>
2756 The OProfile manual can be found here:
2757 <ulink url='http://oprofile.sourceforge.net/doc/index.html'>OProfile manual</ulink>
2758 </para>
2759
2760 <para>
2761 The OProfile website contains links to the above manual and
2762 a bunch of other items, including an extensive set of examples:
2763 <ulink url='http://oprofile.sourceforge.net/about/'>About OProfile</ulink>
2764 </para>
2765 </section>
2766
2767
2768
2769
2770
2771
2772
2773
2774
2775</section>
2776</chapter>
2777<!--
2778vim: expandtab tw=80 ts=4