The Wayback Machine - https://web.archive.org/web/20110927150856/http://hub.opensolaris.org/bin/view/Project+ppc%2Ddev/ctf

CTF - Compact C Type Format

There are three basic CTF-related tools: ctfmerge, ctfconvert, and ctfstabs. Their source is found under usr/src/tools/ctf. The PowerPC community's initial build environment had empty ctf binaries that simply allowed the make of uts to complete. Note that ctfmerge, ctfstabs, and ctfconvert are located in /opt/onbld/bin/ppc on your build machine. We are now at the point of building the tools successfully, but, as you can guess, things are just not that easy.

The ctf tools are shell scripts at the moment, which either run the actual ctf tool or exit without doing anything. CTF_TEST is an environment variable that must be set if you want to use the actual binaries: run CTF_TEST=1; export CTF_TEST in a bash shell.
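For example, a session that switches the wrappers over to the real binaries looks like this (CTF_TEST is the variable the wrapper scripts on this page test for):

```shell
# Enable the real CTF tools for this shell session; the wrapper
# scripts run the actual binaries only when CTF_TEST is set to 1.
CTF_TEST=1
export CTF_TEST

# The wrappers test the variable exactly like this:
if [ "$CTF_TEST" = 1 ]; then
    echo "real ctf tools enabled"
fi
```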

Status

Brian has had to revert to the 2.6 way of building genassym.c and assym.h: basically, the defines are called out in genassym.c rather than being built from the CTF files. From this hand-maintained genassym.c, we use the GCC cross compiler to generate a *.s file with the correctly PPC-aligned objects (data offsets). We then use grep to fix up the slight differences in that *.s file, and the revised *.s file is linked using the x86 linker. Take a look at the Makefile in /uts/chrp/genassym.

If you want to add structure offsets, edit uts/chrp/ml/structs.c; do not edit the structs.h file. If you want to add general defines, put them in uts/chrp/ml/genassym.c; it's fairly straightforward. Now, here is where we are on the development side. Note that there are two ctfconvert scripts: the original polaris dummy and its ksh replacement described below. The files under /opt/onbld/bin/ppc are:


/opt/onbld/bin/ppc/ctfstabs.1
/opt/onbld/bin/ppc/ctfconvert.1
/opt/onbld/bin/ppc/ctfconvert
/opt/onbld/bin/ppc/ctfstabs
/opt/onbld/bin/ppc/ctfmerge.1
/opt/onbld/bin/ppc/ctfmerge
/opt/onbld/bin/ppc/ctfconvert.ksh

The ksh script was created to replace the original PowerPC community (aka polaris) /opt/onbld/bin/ppc/ctfconvert. It allows developers to test the CTF utilities without conflicting with the build environment on the host system. When you build CTF from its sources, ctfconvert is renamed to ctfconvert.1 and copied into /opt/onbld/bin/ppc/. We then set the environment variable CTF_TEST=1 to run the actual binary and test it:


if [ "$CTF_TEST" = 1 ]
then
        # Run the real, renamed binary with the original arguments.
        /opt/onbld/bin/ppc/ctfconvert.1 "$@"
        exit $?
else
        # Default: warn and succeed so the build can proceed without CTF data.
        echo "ctfconvert is a non-operational dummy script!" 1>&2
        exit 0
fi

Current History

We are currently stuck with the fact that ctfstabs / libctf were written with only a native host and target in mind, whereas we need a cross-development environment for the long term. Why, you may ask? Because true embedded development requires it; look at Linux on PPC, where cross development is the only way. None of the work we do here on a cross-development system is throwaway, either.

So the cross development of libctf is somewhat stalled, both on libc availability and on the fact that the ELF file processing was coded without leveraging the general ELF routines that deal with endianness and byte swapping. It appears we will have to recode it using the standard ELF library calls in libelf. We have shelved this for the moment to focus on other items, but it will soon come to the forefront once again.

  1. Modified ctf/stabs/Makefile to include a needed path for an alternate libctf.so.1: we need to point ctfstabs at our cross-dev libctf, not the native x86 one in /opt/lib. It now appears we will have to implement a more general ELF read/write format so that ctfstabs does not expect only native data files. This would have been the more natural approach had cross development been a consideration when the data-manipulation code was designed.
  2. Had to build libctf.so.1 in usr/src/lib/libctf, and had to set CC=/opt/SUNWspro/bin/cc because the Makefile needs a bit of work to support cross development. Building libctf needs libc.so.1; for the genassym utility, when building for PPC on x86, you use the libc for x86, which is what you get with the CC setting above. If you use the pulsar gcc instead of the Sun compiler, you will need the PPC-based libc, which is not completed yet. Your library will be at usr/src/lib/libctf/ppc/libctf.so.1; you will need to copy it over to /opt/onbld/lib/ppc/lib.
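The build steps in item 2 can be sketched as shell commands. This is only a sketch using the paths named on this page; since the commands assume a Solaris build host with the Sun compiler installed, the script below is written with a dry-run wrapper that prints each command unless DRYRUN=0 is set:

```shell
#!/bin/sh
# Sketch of the libctf cross-build steps above.  Dry-run by default:
# commands are printed, not executed, since they assume a Solaris
# build host with /opt/SUNWspro installed.
DRYRUN=${DRYRUN:-1}

run() {
    if [ "$DRYRUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Build libctf with the Sun compiler so the x86 libc.so.1 is used.
run cd usr/src/lib/libctf
run make CC=/opt/SUNWspro/bin/cc

# Copy the resulting library where the cross-dev ctfstabs expects it.
run cp ppc/libctf.so.1 /opt/onbld/lib/ppc/lib
```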

Genassym and CTF

John Levon's Blog entry gives a nice operational overview of how the ctf utils interact with genoffsets and genassym to produce the output on a native machine.

CTF Overview

CTF (Compact C Type Format) encapsulates a reduced form of debugging information similar to DWARF and the venerable stabs. It describes types (structures, unions, typedefs etc.) and function prototypes, and is carefully designed to take a minimum of space in the ELF binaries. The kernel binaries that Sun ship have this data embedded as an ELF section (.SUNW_ctf) so that tools like mdb and dtrace can understand types. Of course, it would have been possible to use existing formats such as DWARF, but they typically have a large space overhead and are more difficult to process.

The CTF data is built from the existing stabs/DWARF data generated by the compiler's -g option, and replaces this existing debugging information in the output binary (ctfconvert performs this job).
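Concretely, the flow is: compile with -g, then run ctfconvert on the resulting object. A minimal sketch (foo.c and the label "mylabel" are placeholders, and the guard skips the step on a machine without the onbld tools installed):

```shell
# Compile with -g so the compiler emits stabs/DWARF, then let
# ctfconvert replace that data with a .SUNW_ctf section.
# foo.c and "mylabel" are hypothetical; the guard makes the
# sketch harmless where ctfconvert is not installed.
if command -v ctfconvert >/dev/null 2>&1; then
    cc -g -c foo.c -o foo.o
    ctfconvert -l mylabel foo.o
else
    echo "ctfconvert not installed; skipping conversion sketch"
fi
```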

/*
 * CTF - Compact ANSI-C Type Format
 *
 * This file format can be used to compactly represent the information needed
 * by a debugger to interpret the ANSI-C types used by a given program.
 * Traditionally, this kind of information is generated by the compiler when
 * invoked with the -g flag and is stored in "stabs" strings or in the more
 * modern DWARF format.  CTF provides a representation of only the information
 * that is relevant to debugging a complex, optimized C program such as the
 * operating system kernel in a form that is significantly more compact than
 * the equivalent stabs or DWARF representation.  The format is data-model
 * independent, so consumers do not need different code depending on whether
 * they are 32-bit or 64-bit programs.  CTF assumes that a standard ELF symbol
 * table is available for use in the debugger, and uses the structure and data
 * of the symbol table to avoid storing redundant information.  The CTF data
 * may be compressed on disk or in memory, indicated by a bit in the header.
 * CTF may be interpreted in a raw disk file, or it may be stored in an ELF
 * section, typically named .SUNW_ctf.  Data structures are aligned so that
 * a raw CTF file or CTF ELF section may be manipulated using mmap(2).
 *
 * The CTF file or section itself has the following structure:
 *
 * +--------+--------+---------+----------+-------+--------+
 * |  file  |  type  |  data   | function | data  | string |
 * | header | labels | objects |   info   | types | table  |
 * +--------+--------+---------+----------+-------+--------+
 *
 * The file header stores a magic number and version information, encoding
 * flags, and the byte offset of each of the sections relative to the end of
 * the header itself.  If the CTF data has been uniquified against another set
 * of CTF data, a reference to that data also appears in the header.  This
 * reference is the name of the label corresponding to the types uniquified
 * against.
 *
 * Following the header is a list of labels, used to group the types included
 * in the data types section.  Each label is accompanied by a type ID i.  A
 * given label refers to the group of types whose IDs are in the range [0, i].
 *
 * Data object and function records are stored in the same order as they
 * appear in the corresponding symbol table, except that symbols marked
 * SHN_UNDEF are not stored and symbols that have no type data are padded out
 * with zeroes.  For each data object, the type ID (a small integer) is
 * recorded.  For each function, the type ID of the return type and argument
 * types is recorded.
 *
 * The data types section is a list of variable size records that represent
 * each type, in order by their ID.  The types themselves form a directed
 * graph, where each node may contain one or more outgoing edges to other
 * type nodes, denoted by their ID.
 *
 * Strings are recorded as a string table ID (0 or 1) and a byte offset into
 * the string table.  String table 0 is the internal CTF string table.  String
 * table 1 is the external string table, which is the string table associated
 * with the ELF symbol table for this object.  CTF does not record any strings
 * that are already in the symbol table, and the CTF string table does not
 * contain any duplicated strings.
 *
 * If the CTF data has been merged with another parent CTF object, some
 * outgoing edges may refer to type nodes that exist in another CTF object.
 * The debugger and libctf library are responsible for connecting the
 * appropriate objects together so that the full set of types can be explored
 * and manipulated.
 */

Another overview can be found in the following source file:

usr/src/tools/ctf/cvt/ctfmerge.c

Created by admin on 2009/10/26 12:17
Last modified by admin on 2009/10/26 12:17
