Scanning dependencies of target slowness weirdness.

Carlo Wood
Hello all.

Apart from the fact that I think 7 seconds is already too long
for generating dependencies for source files that can be read
in 1 second (and whose result can be written to disk in 1 second),
I ran into something that I cannot explain :/.

I recently bought a new PC: 8 cores, 32 GB RAM -- each core
is faster than those of the 4-core, 4 GB machine that I had.
Both machines run the same OS, the same Linux kernel, and the
same software versions.

SUMMARY OF RESULTS
------------------

The old 4-core machine (Intel QX6700 @ 2.6 GHz) generates the
dependencies for this project in

(OLD machine)

real    0m7.601s
user    0m2.488s
sys     0m4.164s

Which is annoyingly slow, but
the new 8-core machine (AMD FX-8150 @ 3.6 GHz) generates
the same dependencies for this project in

(NEW machine)

real    0m32.653s
user    0m2.512s
sys     0m10.385s

which is unacceptably slow :/

DETAILS
-------

$ cat /etc/debian_version
OLD: wheezy/sid
NEW: wheezy/sid

$ uname -a
OLD: Linux hikaru 3.2.0-3-amd64 #1 SMP Mon Jul 23 02:45:17 UTC 2012 x86_64 GNU/Linux
NEW: Linux malatos 3.2.0-3-amd64 #1 SMP Mon Jul 23 02:45:17 UTC 2012 x86_64 GNU/Linux

$ ldd /usr/bin/cmake
OLD:
        linux-vdso.so.1 =>  (0x00007fff48372000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fabcd035000)
        libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007fabcce0b000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fabccbf3000)
        libarchive.so.12 => /usr/lib/x86_64-linux-gnu/libarchive.so.12 (0x00007fabcc957000)
        libcurl-gnutls.so.4 => /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4 (0x00007fabcc6f3000)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fabcc3eb000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fabcc169000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fabcbf53000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fabcbbcb000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fabcb9af000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fabcd276000)
        libacl.so.1 => /lib/x86_64-linux-gnu/libacl.so.1 (0x00007fabcb7a6000)
        libattr.so.1 => /lib/x86_64-linux-gnu/libattr.so.1 (0x00007fabcb5a0000)
        liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007fabcb37d000)
        libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007fabcb16d000)
        libxml2.so.2 => /usr/lib/x86_64-linux-gnu/libxml2.so.2 (0x00007fabcae0d000)
        libnettle.so.4 => /usr/lib/x86_64-linux-gnu/libnettle.so.4 (0x00007fabcabe6000)
        libidn.so.11 => /usr/lib/x86_64-linux-gnu/libidn.so.11 (0x00007fabca9b2000)
        libssh2.so.1 => /usr/lib/x86_64-linux-gnu/libssh2.so.1 (0x00007fabca788000)
        liblber-2.4.so.2 => /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 (0x00007fabca579000)
        libldap_r-2.4.so.2 => /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2 (0x00007fabca328000)
        librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fabca11f000)
        libgssapi_krb5.so.2 => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007fabc9ee0000)
        libgnutls.so.26 => /usr/lib/x86_64-linux-gnu/libgnutls.so.26 (0x00007fabc9c20000)
        libgcrypt.so.11 => /lib/x86_64-linux-gnu/libgcrypt.so.11 (0x00007fabc99a1000)
        librtmp.so.0 => /usr/lib/x86_64-linux-gnu/librtmp.so.0 (0x00007fabc9787000)
        libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007fabc9570000)
        libsasl2.so.2 => /usr/lib/x86_64-linux-gnu/libsasl2.so.2 (0x00007fabc9355000)
        libkrb5.so.3 => /usr/lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007fabc9081000)
        libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007fabc8e57000)
        libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007fabc8c53000)
        libkrb5support.so.0 => /usr/lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007fabc8a4a000)
        libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007fabc8845000)
        libtasn1.so.3 => /usr/lib/x86_64-linux-gnu/libtasn1.so.3 (0x00007fabc8634000)
        libp11-kit.so.0 => /usr/lib/x86_64-linux-gnu/libp11-kit.so.0 (0x00007fabc8422000)
        libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007fabc821e000)
NEW:
        linux-vdso.so.1 =>  (0x00007fffe5800000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fbd2bec0000)
        libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007fbd2bc90000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fbd2ba78000)
        libarchive.so.12 => /usr/lib/x86_64-linux-gnu/libarchive.so.12 (0x00007fbd2b7d8000)
        libcurl-gnutls.so.4 => /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4 (0x00007fbd2b570000)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fbd2b268000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fbd2afe0000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fbd2adc8000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbd2aa40000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fbd2a820000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fbd2c0e8000)
        libacl.so.1 => /lib/x86_64-linux-gnu/libacl.so.1 (0x00007fbd2a610000)
        libattr.so.1 => /lib/x86_64-linux-gnu/libattr.so.1 (0x00007fbd2a408000)
        liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007fbd2a1e0000)
        libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007fbd29fd0000)
        libxml2.so.2 => /usr/lib/x86_64-linux-gnu/libxml2.so.2 (0x00007fbd29c70000)
        libnettle.so.4 => /usr/lib/x86_64-linux-gnu/libnettle.so.4 (0x00007fbd29a48000)
        libidn.so.11 => /usr/lib/x86_64-linux-gnu/libidn.so.11 (0x00007fbd29810000)
        libssh2.so.1 => /usr/lib/x86_64-linux-gnu/libssh2.so.1 (0x00007fbd295e0000)
        liblber-2.4.so.2 => /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2 (0x00007fbd293d0000)
        libldap_r-2.4.so.2 => /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2 (0x00007fbd29178000)
        librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fbd28f70000)
        libgssapi_krb5.so.2 => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007fbd28d30000)
        libgnutls.so.26 => /usr/lib/x86_64-linux-gnu/libgnutls.so.26 (0x00007fbd28a70000)
        libgcrypt.so.11 => /lib/x86_64-linux-gnu/libgcrypt.so.11 (0x00007fbd287f0000)
        librtmp.so.0 => /usr/lib/x86_64-linux-gnu/librtmp.so.0 (0x00007fbd285d0000)
        libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007fbd283b8000)
        libsasl2.so.2 => /usr/lib/x86_64-linux-gnu/libsasl2.so.2 (0x00007fbd28198000)
        libkrb5.so.3 => /usr/lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007fbd27ec0000)
        libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007fbd27c90000)
        libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007fbd27a88000)
        libkrb5support.so.0 => /usr/lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007fbd27878000)
        libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007fbd27670000)
        libtasn1.so.3 => /usr/lib/x86_64-linux-gnu/libtasn1.so.3 (0x00007fbd27458000)
        libp11-kit.so.0 => /usr/lib/x86_64-linux-gnu/libp11-kit.so.0 (0x00007fbd27240000)
        libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007fbd27038000)

$ cmake --version
OLD: cmake version 2.8.9-rc1
NEW: cmake version 2.8.9-rc1

The executed commands in this case are:
1) Remove the whole build directory.
2) Reconfigure the project.
3) Run the command whose timings are given above:

OLD:

time (cd /SSD/singularity/viewer-linux-x86_64-release; /usr/bin/cmake -E cmake_depends
  "Unix Makefiles" /usr/src/secondlife/viewers/singularity/SingularityViewer/linden/indra
  /usr/src/secondlife/viewers/singularity/SingularityViewer/linden/indra/newview
  /SSD/singularity/viewer-linux-x86_64-release
  /SSD/singularity/viewer-linux-x86_64-release/newview
  /SSD/singularity/viewer-linux-x86_64-release/newview/CMakeFiles/secondlife-bin.dir/DependInfo.cmake
  --color=)

NEW:

time (cd /SSD2/singularity/viewer-linux-x86_64-release; /usr/bin/cmake -E cmake_depends
  "Unix Makefiles" /opt-ntfs/secondlife/viewers/singularity/SingularityViewer/linden/indra
  /opt-ntfs/secondlife/viewers/singularity/SingularityViewer/linden/indra/newview
  /SSD2/singularity/viewer-linux-x86_64-release
  /SSD2/singularity/viewer-linux-x86_64-release/newview
  /SSD2/singularity/viewer-linux-x86_64-release/newview/CMakeFiles/secondlife-bin.dir/DependInfo.cmake
  --color=)

(all on one line)


On the OLD machine, /SSD is the mount point of:
Filesystem   Size  Used  Avail Use% Mounted on
/dev/sdf1    127G   22G   105G  18% /SSD
where
$ ls -l /dev/disk/by-id | grep sdf1 | grep scsi
lrwxrwxrwx 1 root root 10 Aug 30 20:40 scsi-SATA_Corsair_Perform1101810001000341012E-part1 -> ../../sdf1
which is a 128 GB SSD.
$ sudo hdparm -t -T /dev/sdf1
gives 241.82 MB/sec read speed.

On the NEW machine, /SSD2 is the mount point of:
Filesystem   Size  Used  Avail Use% Mounted on
/dev/sdb3     60G  5.3G    51G  10% /SSD2
where
$ ls -l /dev/disk/by-id | grep sdb3 | grep scsi
lrwxrwxrwx 1 root root 10 Aug 30 22:06 scsi-SATA_OCZ-VERTEX4_OCZ-A58A63H04CI286B9-part3 -> ../../sdb3
which is a better 128 GB SSD.
$ sudo hdparm -t -T /dev/sdb3
gives 433.44 MB/sec read speed.

The write speed is (a lot) faster too.

I ran 'valgrind --tool=callgrind' on both commands, and the profiles look the same.

The output file sizes are:

NEW:
-rw-r--r-- 1 carlo carlo 22078131 Sep  1 05:08 /SSD2/singularity/viewer-linux-x86_64-release/newview/CMakeFiles/secondlife-bin.dir/depend.make
OLD:
-rw-r--r-- 1 carlo carlo 21944265 Sep  1 07:07 /SSD/singularity/viewer-linux-x86_64-release/newview/CMakeFiles/secondlife-bin.dir/depend.make

where the size difference is caused by the difference in path lengths (see above).


So, if there is ANY difference at all, you'd think it has to be this:

On the OLD machine, /usr/src is the mount point of:
Filesystem   Size  Used  Avail Use% Mounted on
/dev/md7      60G   52G   5.0G  92% /usr/src
where md7 is a RAID 5 of three HDDs (sdb, sdc and sdd), which
are 74.4 GB 10,000 rpm WD Raptors,
and md7 has an ext3 filesystem.
hdparm -t -T /dev/md7 reports 152.17 MB/sec

On the NEW machine, /opt-ntfs is the mount point of:
Filesystem   Size  Used  Avail Use% Mounted on
/dev/sdd3    301G  2.3G   298G   1% /opt-ntfs
lrwxrwxrwx 1 root root 10 Aug 30 22:03
scsi-SATA_WDC_WD5000AAKX-_WD-WCAYUJX25063-part3 -> ../../sdd3
which is a WD Caviar Blue 500 GB, 7200 rpm, 16 MB cache, SATA3 drive
and has an NTFS filesystem.
hdparm -t -T /dev/sdd3 reports 117.77 MB/sec

Reading all source files (using cat, and after running hdparm -f on all drives)
and writing them to /dev/null gives:

OLD:  0.133 seconds
NEW:  0.333 seconds
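
For reference, that read test amounts to something like this on the
NEW machine (a rough sketch; the exact file selection is an approximation):

  # Flush the drive's buffer cache first (done for every drive involved),
  # then read every source file once and throw the data away.
  sudo hdparm -f /dev/sdd3
  cd /opt-ntfs/secondlife/viewers/singularity/SingularityViewer/linden/indra
  time find . -type f \( -name '*.cpp' -o -name '*.h' \) -exec cat {} + > /dev/null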

Also note that pre-caching the source files
by reading them all prior to running the cmake command
hardly makes any difference in the timing of the latter.

In other words, this can't have anything to do with reading
the source code from disk.

Any ideas what else I can test?

--
Carlo Wood <[hidden email]>

Re: Scanning dependencies of target slowness weirdness.

Amine Chadly
Hi,
I am no CMake expert, but I would test on identical file system types...
I am pretty sure that NTFS and ext3 don't have the same read/write access latency, and this might explain your time difference.
You could additionally run strace on the two setups and see if there are major differences that could explain the performance hit...
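
For example, something like this (a rough sketch -- substitute the same
cmake_depends arguments you timed; the placeholders here are not real paths):

  strace -f -c -o cmake_depends.syscalls \
      /usr/bin/cmake -E cmake_depends "Unix Makefiles" <source-dir> <source-dir>/newview \
      <build-dir> <build-dir>/newview \
      <build-dir>/newview/CMakeFiles/secondlife-bin.dir/DependInfo.cmake --color=

The -c summary gives per-syscall counts and times, so a big difference in
stat()/open() cost between the two machines should stand out right away.
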
Good luck.

--
  Amine


Re: Scanning dependencies of target slowness weirdness.

Bill Hoffman
On 9/6/2012 5:56 AM, Amine Chadly wrote:
> Hi,
> I am no CMake expert, but I would test on an identical file system types...
> I am pretty sure that ntfs and ext3fs don't have the same read-write
> access latency and this might explain your time difference.
> You could additionally launch a strace on the two settings and see if
> there are major differences that could explain the performance hit...
> Good luck.
>
Carlo wrote: "... and has a NTFS."

It is almost certainly the NTFS.  What if you move the source to the SSD?

-Bill


Re: Scanning dependencies of target slowness weirdness.

Carlo Wood
Bah, this whole mailing list disappeared into some file that I wasn't
aware of :p  But I found it again now...
 
I moved the sources to a slow (7200 rpm) HDD using ext4, and it
became 5 times faster.
 
Conclusion: NTFS sucks totally.

On Thu, Sep 06, 2012 at 11:47:36AM -0400, Bill Hoffman wrote:

> On 9/6/2012 5:56 AM, Amine Chadly wrote:
> >Hi,
> >I am no CMake expert, but I would test on an identical file system types...
> >I am pretty sure that ntfs and ext3fs don't have the same read-write
> >access latency and this might explain your time difference.
> >You could additionally launch a strace on the two settings and see if
> >there are major differences that could explain the performance hit...
> >Good luck.
> >
> and has a NTFS.
>
> It is almost certainly the NTFS.  What if you move the source to the SSD?
>
> -Bill

--
Carlo Wood <[hidden email]>

Re: Scanning dependencies of target slowness weirdness.

Yuri Timenkov
Or its implementation on Linux... I doubt anybody has had any problems with NTFS on Windows :)

You may also be using it via FUSE... not the fastest approach.

If you need to keep your sources on NTFS, you can create a temporary location on a native FS and rsync to it before building (or use a DVCS). We used this instead of NFS/Samba for quick cross-platform builds. Much faster.
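
For example, something along these lines (the destination path is just a
placeholder):

  # Mirror the NTFS source tree onto a native filesystem and build from the copy.
  rsync -a --delete /opt-ntfs/secondlife/viewers/singularity/SingularityViewer/ \
        /SSD2/src-mirror/SingularityViewer/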

On Mon, Sep 10, 2012 at 7:10 PM, Carlo Wood <[hidden email]> wrote:
Bah, this whole mailinglist disappeared to some file that I wasn't
aware of :p  But I found it back now...

I moved the sources to a slow (7200 rpm) HDD using ext4, and it
became 5 times faster.

Conclusion: NTFS sucks totally.

On Thu, Sep 06, 2012 at 11:47:36AM -0400, Bill Hoffman wrote:
> On 9/6/2012 5:56 AM, Amine Chadly wrote:
> >Hi,
> >I am no CMake expert, but I would test on an identical file system types...
> >I am pretty sure that ntfs and ext3fs don't have the same read-write
> >access latency and this might explain your time difference.
> >You could additionally launch a strace on the two settings and see if
> >there are major differences that could explain the performance hit...
> >Good luck.
> >
> and has a NTFS.
>
> It is almost certainly the NTFS.  What if you move the source to the SSD?
>
> -Bill

--
Carlo Wood <[hidden email]>

Re: Scanning dependencies of target slowness weirdness.

Bill Hoffman
On 9/10/2012 12:12 PM, Yuri Timenkov wrote:
> Or it's implementation on Linux... I doubt anybody had any problems with
> NTFS on Windows :)

No, NTFS sucks on any OS.  It is way slower with file stats and file
access.
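
A quick way to see it (just a sketch): time a recursive listing, which has
to stat every file, once on the NTFS mount and once on the same tree on an
ext3/ext4 partition:

  time ls -lR /opt-ntfs/secondlife/viewers > /dev/null

cmake_depends has to open and stat thousands of source files and headers,
so that per-file overhead adds up quickly.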

-Bill


Re: Scanning dependencies of target slowness weirdness.

Michael Jackson
Thought it might be interesting to throw some other numbers at this. Everything is running on OS X 10.6.8, 32 GB RAM, SSD boot volume:

7200RPM Drive using Paragon Systems NTFS Version 10
43.4 secs NTFS
7200RPM Drive using Native OS X HFS+ drivers.
12.6 secs HFS+

Both of the above are on physically the same hard drive, but on 2 different partitions.

10.4 secs HFS+ SSD (OWC 3G Mercury - 2010 Vintage)
9.9 secs HFS+  SSD (Corsair Force 3 - 2012 Vintage)

Both of the above were HFS+ Journaled on a 3 Gb/s connection (although the Corsair is 6 Gb/s capable).

The times are for configuring one of my projects (dream3D.bluequartz.net).
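That is, roughly (the source path is just illustrative):

  mkdir build && cd build
  time cmake /path/to/DREAM3D    # run from a build dir on the filesystem under test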

Paragon claims theirs is the fastest NTFS implementation on OS X.

Just adding some points to the conversation.
--
Mike Jackson <www.bluequartz.net>

On Sep 10, 2012, at 12:39 PM, Bill Hoffman wrote:

> On 9/10/2012 12:12 PM, Yuri Timenkov wrote:
>> Or it's implementation on Linux... I doubt anybody had any problems with
>> NTFS on Windows :)
>
> No, NTFS sucks on any OS.  It is way slower with file stats and file access.
>
> -Bill


Re: Scanning dependencies of target slowness weirdness.

Andreas Pakulat
Hi,

On Mon, Sep 10, 2012 at 6:12 PM, Yuri Timenkov <[hidden email]> wrote:
> Or it's implementation on Linux... I doubt anybody had any problems with
> NTFS on Windows :)

Actually, people do have problems with that, but since there's
basically no competitor, you don't have a choice. And Microsoft has no
reason to improve the situation (well, let's hope Windows 8's new FS
will bring some improvement). Windows' filesystems just suck when it
comes to dealing with many smallish files (i.e. anything you can read
into the hard disk's cache these days).

Andreas