Mirror of the gdb-patches mailing list
 help / color / mirror / Atom feed
* [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory
@ 2012-05-18 22:41 Pierre Muller
  2012-05-25  8:09 ` PING " Pierre Muller
                   ` (2 more replies)
  0 siblings, 3 replies; 32+ messages in thread
From: Pierre Muller @ 2012-05-18 22:41 UTC (permalink / raw)
  To: gdb-patches

  Here is a RFA for inclusion of scripts to gdb/contrib/ari.

  The only changes to RFC-v2 are:
1) directory moved from gdb/ari to gdb/contrib./ari
2) create-web-ari-in-src.sh adapted to new directory
3) This script now output that location of the generated
web page (with a different message depending on
the existence of this file).



Pierre Muller
GDB pascal language maintainer


2012-05-19  Pierre Muller  <muller@ics.u-strasbg.fr>

	* contrib/ari/create-web-ari-in-src.sh: New file.
	* contrib/ari/gdb_ari.sh: New file.
	* contrib/ari/gdb_find.sh: New file.
	* contrib/ari/update-web-ari.sh: New file.

Index: contrib/ari/create-web-ari-in-src.sh
===================================================================
RCS file: contrib/ari/create-web-ari-in-src.sh
diff -N contrib/ari/create-web-ari-in-src.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/create-web-ari-in-src.sh	18 May 2012 22:31:42 -0000
@@ -0,0 +1,68 @@
+#! /bin/sh
+
+# GDB script to create web ARI page directly from within gdb/ari directory.
+#
+# Copyright (C) 2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+set -x
+
+# Determine directory of current script.
+scriptpath=`dirname $0`
+# If "scriptpath" is a relative path, then convert it to absolute.
+if [ "`echo ${scriptpath} | cut -b1`" != '/' ] ; then
+    scriptpath="`pwd`/${scriptpath}"
+fi
+
+# update-web-ari.sh script wants four parameters
+# 1: directory of checkout src or gdb-RELEASE for release sources.
+# 2: a temp directory.
+# 3: a directory for generated web page.
+# 4: The name of the current package, must be gdb here.
+# Here we provide default values for these 4 parameters
+
+# srcdir parameter
+if [ -z "${srcdir}" ] ; then
+  srcdir=${scriptpath}/../../..
+fi
+
+# Determine location of a temporary directory to be used by
+# update-web-ari.sh script.
+if [ -z "${tempdir}" ] ; then
+  if [ ! -z "$TMP" ] ; then
+    tempdir=$TMP/create-ari
+  elif [ ! -z "$TEMP" ] ; then
+    tempdir=$TEMP/create-ari
+  else
+    tempdir=/tmp/create-ari
+  fi
+fi
+
+# Default location of generate index.hmtl web page.
+if [ -z "${webdir}" ] ; then
+  webdir=~/htdocs/www/local/ari
+fi
+
+# Launch update-web-ari.sh in same directory as current script.
+${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
+
+if [ -f "${webdir}/index.html" ] ; then
+  echo "ARI output can be viewed in file \"${webdir}/index.html\""
+else
+  echo "ARI script failed to generate file \"${webdir}/index.html\""
+fi
+
Index: contrib/ari/gdb_ari.sh
===================================================================
RCS file: contrib/ari/gdb_ari.sh
diff -N contrib/ari/gdb_ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/gdb_ari.sh	18 May 2012 22:31:42 -0000
@@ -0,0 +1,1347 @@
+#!/bin/sh
+
+# GDB script to list of problems using awk.
+#
+# Copyright (C) 2002-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=c ; export LANG
+LC_ALL=c ; export LC_ALL
+
+# Permanent checks take the form:
+
+#     Do not use XXXX, ISO C 90 implies YYYY
+#     Do not use XXXX, instead use YYYY''.
+
+# and should never be removed.
+
+# Temporary checks take the form:
+
+#     Replace XXXX with YYYY
+
+# and once they reach zero, can be eliminated.
+
+# FIXME: It should be able to override this on the command line
+error="regression"
+warning="regression"
+ari="regression eol code comment deprecated legacy obsolete gettext"
+all="regression eol code comment deprecated legacy obsolete gettext
deprecate internal gdbarch macro"
+print_doc=0
+print_idx=0
+
+usage ()
+{
+    cat <<EOF 1>&2
+Error: $1
+
+Usage:
+    $0 --print-doc --print-idx -Wall -Werror -W<category> <file> ...
+Options:
+  --print-doc    Print a list of all potential problems, then exit.
+  --print-idx    Include the problems IDX (index or key) in every message.
+  --src=file     Write source lines to file.
+  -Werror        Treat all problems as errors.
+  -Wall          Report all problems.
+  -Wari          Report problems that should be fixed in new code.
+  -W<category>   Report problems in the specifed category.  Vaid categories
+                 are: ${all}
+EOF
+    exit 1
+}
+
+
+# Parse the various options
+Woptions=
+srclines=""
+while test $# -gt 0
+do
+    case "$1" in
+    -Wall ) Woptions="${all}" ;;
+    -Wari ) Woptions="${ari}" ;;
+    -Werror ) Werror=1 ;;
+    -W* ) Woptions="${Woptions} `echo x$1 | sed -e 's/x-W//'`" ;;
+    --print-doc ) print_doc=1 ;;
+    --print-idx ) print_idx=1 ;;
+    --src=* ) srclines="`echo $1 | sed -e 's/--src=/srclines=\"/'`\"" ;;
+    -- ) shift ; break ;;
+    - ) break ;;
+    -* ) usage "$1: unknown option" ;;
+    * ) break ;;
+    esac
+    shift
+done
+if test -n "$Woptions" ; then
+    warning="$Woptions"
+    error=
+fi
+
+
+# -Werror implies treating all warnings as errors.
+if test -n "${Werror}" ; then
+    error="${error} ${warning}"
+fi
+
+
+# Validate all errors and warnings.
+for w in ${warning} ${error}
+do
+    case " ${all} " in
+    *" ${w} "* ) ;;
+    * ) usage "Unknown option -W${w}" ;;
+    esac
+done
+
+
+# make certain that there is at least one file.
+if test $# -eq 0 -a ${print_doc} = 0
+then
+    usage "Missing file."
+fi
+
+
+# Convert the errors/warnings into corresponding array entries.
+for a in ${all}
+do
+    aris="${aris} ari_${a} = \"${a}\";"
+done
+for w in ${warning}
+do
+    warnings="${warnings} warning[ari_${w}] = 1;"
+done
+for e in ${error}
+do
+    errors="${errors} error[ari_${e}]  = 1;"
+done
+
+awk -- '
+BEGIN {
+    # NOTE, for a per-file begin use "FNR == 1".
+    '"${aris}"'
+    '"${errors}"'
+    '"${warnings}"'
+    '"${srclines}"'
+    print_doc =  '$print_doc'
+    print_idx =  '$print_idx'
+    PWD = "'`pwd`'"
+}
+
+# Print the error message for BUG.  Append SUPLEMENT if non-empty.
+function print_bug(file,line,prefix,category,bug,doc,supplement,
suffix,idx) {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    if (supplement) {
+	suffix = " (" supplement ")"
+    } else {
+	suffix = ""
+    }
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print file ":" line ": " prefix category ": " idx doc suffix
+    if (srclines != "") {
+	print file ":" line ":" $0 >> srclines
+    }
+}
+
+function fix(bug,file,count) {
+    skip[bug, file] = count
+    skipped[bug, file] = 0
+}
+
+function fail(bug,supplement) {
+    if (doc[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing doc
for bug " bug)
+	exit
+    }
+    if (category[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing
category for bug " bug)
+	exit
+    }
+
+    if (ARI_OK == bug) {
+	return
+    }
+    # Trim the filename down to just DIRECTORY/FILE so that it can be
+    # robustly used by the FIX code.
+
+    if (FILENAME ~ /^\//) {
+	canonicalname = FILENAME
+    } else {
+        canonicalname = PWD "/" FILENAME
+    }
+    shortname = gensub (/^.*\/([^\\]*\/[^\\]*)$/, "\\1", 1, canonicalname)
+
+    skipped[bug, shortname]++
+    if (skip[bug, shortname] >= skipped[bug, shortname]) {
+	# print FILENAME, FNR, skip[bug, FILENAME], skipped[bug, FILENAME],
bug
+	# Do nothing
+    } else if (error[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "", category[bug], bug, doc[bug],
supplement)
+    } else if (warning[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "warning: ", category[bug], bug, doc[bug],
supplement)
+    }
+}
+
+FNR == 1 {
+    seen[FILENAME] = 1
+    if (match(FILENAME, "\\.[ly]$")) {
+      # FILENAME is a lex or yacc source
+      is_yacc_or_lex = 1
+    }
+    else {
+      is_yacc_or_lex = 0
+    }
+}
+END {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    # Did we do only a partial skip?
+    for (bug_n_file in skip) {
+	split (bug_n_file, a, SUBSEP)
+	bug = a[1]
+	file = a[2]
+	if (seen[file] && (skipped[bug_n_file] < skip[bug_n_file])) {
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    b = file " missing " bug
+	    print_bug(file, 0, "", "internal", file " missing " bug,
"Expecting " skip[bug_n_file] " occurances of bug " bug " in file " file ",
only found " skipped[bug_n_file])
+	}
+    }
+}
+
+
+# Skip OBSOLETE lines
+/(^|[^_[:alnum:]])OBSOLETE([^_[:alnum:]]|$)/ { next; }
+
+# Skip ARI lines
+
+BEGIN {
+    ARI_OK = ""
+}
+
+/\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = gensub(/^.*\/\*
ARI:[[:space:]]*(.*[^[:space:]])[[:space:]]*\*\/.*$/, "\\1", 1, $0)
+    # print "ARI line found \"" $0 "\""
+    # print "ARI_OK \"" ARI_OK "\""
+}
+! /\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = ""
+}
+
+
+# Things in comments
+
+BEGIN { doc["GNU/Linux"] = "\
+Do not use `Linux'\'', instead use `Linux kernel'\'' or `GNU/Linux
system'\'';\
+ comments should clearly differentiate between the two (this test assumes
that\
+ word `Linux'\'' appears on the same line as the word `GNU'\'' or
`kernel'\''\
+ or a kernel version"
+    category["GNU/Linux"] = ari_comment
+}
+/(^|[^_[:alnum:]])Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux [:digit:]\.[:digit:]+)/ {
+    fail("GNU/Linux")
+}
+
+BEGIN { doc["ARGSUSED"] = "\
+Do not use ARGSUSED, unnecessary"
+    category["ARGSUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ARGSUSED([^_[:alnum:]]|$)/ {
+    fail("ARGSUSED")
+}
+
+
+# SNIP - Strip out comments - SNIP
+
+FNR == 1 {
+    comment_p = 0
+}
+comment_p && /\*\// { gsub (/^([^\*]|\*+[^\/\*])*\*+\//, " "); comment_p =
0; }
+comment_p { next; }
+!comment_p { gsub (/\/\*([^\*]|\*+[^\/\*])*\*+\//, " "); }
+!comment_p && /(^|[^"])\/\*/ { gsub (/\/\*.*$/, " "); comment_p = 1; }
+
+
+BEGIN { doc["_ markup"] = "\
+All messages should be marked up with _."
+    category["_ markup"] = ari_gettext
+}
+/^[^"]*[[:space:]](warning|error|error_no_arg|query|perror_with_name)[[:spa
ce:]]*\([^_\(a-z]/ {
+    if (! /\("%s"/) {
+	fail("_ markup")
+    }
+}
+
+BEGIN { doc["trailing new line"] = "\
+A message should not have a trailing new line"
+    category["trailing new line"] = ari_gettext
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(_\(".*\\n"\)[\),]/ {
+    fail("trailing new line")
+}
+
+# Include files for which GDB has a custom version.
+
+BEGIN { doc["assert.h"] = "\
+Do not include assert.h, instead include \"gdb_assert.h\"";
+    category["assert.h"] = ari_regression
+    fix("assert.h", "gdb/gdb_assert.h", 0) # it does not use it
+}
+/^#[[:space:]]*include[[:space:]]+.assert\.h./ {
+    fail("assert.h")
+}
+
+BEGIN { doc["dirent.h"] = "\
+Do not include dirent.h, instead include gdb_dirent.h"
+    category["dirent.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.dirent\.h./ {
+    fail("dirent.h")
+}
+
+BEGIN { doc["regex.h"] = "\
+Do not include regex.h, instead include gdb_regex.h"
+    category["regex.h"] = ari_regression
+    fix("regex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.regex\.h./ {
+    fail("regex.h")
+}
+
+BEGIN { doc["xregex.h"] = "\
+Do not include xregex.h, instead include gdb_regex.h"
+    category["xregex.h"] = ari_regression
+    fix("xregex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.xregex\.h./ {
+    fail("xregex.h")
+}
+
+BEGIN { doc["gnu-regex.h"] = "\
+Do not include gnu-regex.h, instead include gdb_regex.h"
+    category["gnu-regex.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.gnu-regex\.h./ {
+    fail("gnu regex.h")
+}
+
+BEGIN { doc["stat.h"] = "\
+Do not include stat.h or sys/stat.h, instead include gdb_stat.h"
+    category["stat.h"] = ari_regression
+    fix("stat.h", "gdb/gdb_stat.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.stat\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/stat\.h./ {
+    fail("stat.h")
+}
+
+BEGIN { doc["wait.h"] = "\
+Do not include wait.h or sys/wait.h, instead include gdb_wait.h"
+    fix("wait.h", "gdb/gdb_wait.h", 2);
+    category["wait.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.wait\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/wait\.h./ {
+    fail("wait.h")
+}
+
+BEGIN { doc["vfork.h"] = "\
+Do not include vfork.h, instead include gdb_vfork.h"
+    fix("vfork.h", "gdb/gdb_vfork.h", 1);
+    category["vfork.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.vfork\.h./ {
+    fail("vfork.h")
+}
+
+BEGIN { doc["error not internal-warning"] = "\
+Do not use error(\"internal-warning\"), instead use internal_warning"
+    category["error not internal-warning"] = ari_regression
+}
+/error.*\"[Ii]nternal.warning/ {
+    fail("error not internal-warning")
+}
+
+BEGIN { doc["%p"] = "\
+Do not use printf(\"%p\"), instead use printf(\"%s\",paddr()) to dump a \
+target address, or host_address_to_string() for a host address"
+    category["%p"] = ari_code
+}
+/%p/ && !/%prec/ {
+    fail("%p")
+}
+
+BEGIN { doc["%ll"] = "\
+Do not use printf(\"%ll\"), instead use printf(\"%s\",phex()) to dump a \
+`long long'\'' value"
+    category["%ll"] = ari_code
+}
+# Allow %ll in scanf
+/%[0-9]*ll/ && !/scanf \(.*%[0-9]*ll/ {
+    fail("%ll")
+}
+
+
+# SNIP - Strip out strings - SNIP
+
+# Test on top.c, scm-valprint.c, remote-rdi.c, ada-lang.c
+FNR == 1 {
+    string_p = 0
+    trace_string = 0
+}
+# Strip escaped characters.
+{ gsub(/\\./, "."); }
+# Strip quoted quotes.
+{ gsub(/'\''.'\''/, "'\''.'\''"); }
+# End of multi-line string
+string_p && /\"/ {
+    if (trace_string) print "EOS:" FNR, $0;
+    gsub (/^[^\"]*\"/, "'\''");
+    string_p = 0;
+}
+# Middle of multi-line string, discard line.
+string_p {
+    if (trace_string) print "MOS:" FNR, $0;
+    $0 = ""
+}
+# Strip complete strings from the middle of the line
+!string_p && /\"[^\"]*\"/ {
+    if (trace_string) print "COS:" FNR, $0;
+    gsub (/\"[^\"]*\"/, "'\''");
+}
+# Start of multi-line string
+BEGIN { doc["multi-line string"] = "\
+Multi-line string must have the newline escaped"
+    category["multi-line string"] = ari_regression
+}
+!string_p && /\"/ {
+    if (trace_string) print "SOS:" FNR, $0;
+    if (/[^\\]$/) {
+	fail("multi-line string")
+    }
+    gsub (/\"[^\"]*$/, "'\''");
+    string_p = 1;
+}
+# { print }
+
+# Multi-line string
+string_p &&
+
+# Accumulate continuation lines
+FNR == 1 {
+    cont_p = 0
+}
+!cont_p { full_line = ""; }
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next;
}
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+
+# GDB uses ISO C 90.  Check for any non pure ISO C 90 code
+
+BEGIN { doc["PARAMS"] = "\
+Do not use PARAMS(), ISO C 90 implies prototypes"
+    category["PARAMS"] = ari_regression
+}
+/(^|[^_[:alnum:]])PARAMS([^_[:alnum:]]|$)/ {
+    fail("PARAMS")
+}
+
+BEGIN { doc["__func__"] = "\
+Do not use __func__, ISO C 90 does not support this macro"
+    category["__func__"] = ari_regression
+    fix("__func__", "gdb/gdb_assert.h", 1)
+}
+/(^|[^_[:alnum:]])__func__([^_[:alnum:]]|$)/ {
+    fail("__func__")
+}
+
+BEGIN { doc["__FUNCTION__"] = "\
+Do not use __FUNCTION__, ISO C 90 does not support this macro"
+    category["__FUNCTION__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__FUNCTION__([^_[:alnum:]]|$)/ {
+    fail("__FUNCTION__")
+}
+
+BEGIN { doc["__CYGWIN32__"] = "\
+Do not use __CYGWIN32__, instead use __CYGWIN__ or, better, an explicit \
+autoconf tests"
+    category["__CYGWIN32__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__CYGWIN32__([^_[:alnum:]]|$)/ {
+    fail("__CYGWIN32__")
+}
+
+BEGIN { doc["PTR"] = "\
+Do not use PTR, ISO C 90 implies `void *'\''"
+    category["PTR"] = ari_regression
+    #fix("PTR", "gdb/utils.c", 6)
+}
+/(^|[^_[:alnum:]])PTR([^_[:alnum:]]|$)/ {
+    fail("PTR")
+}
+
+BEGIN { doc["UCASE function"] = "\
+Function name is uppercase."
+    category["UCASE function"] = ari_code
+    possible_UCASE = 0
+    UCASE_full_line = ""
+}
+(possible_UCASE) {
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    # Closing brace found?
+    else if (UCASE_full_line ~ \
+	/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((UCASE_full_line ~ \
+	    /^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = UCASE_full_line;
+	    fail("UCASE function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_UCASE = 0
+	UCASE_full_line = ""
+    } else {
+	UCASE_full_line = UCASE_full_line $0;
+    }
+}
+/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_UCASE = 1
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    possible_FNR = FNR
+    UCASE_full_line = $0
+}
+
+
+BEGIN { doc["editCase function"] = "\
+Function name starts lower case but has uppercased letters."
+    category["editCase function"] = ari_code
+    possible_editCase = 0
+    editCase_full_line = ""
+}
+(possible_editCase) {
+    if (ARI_OK == "ediCase function") {
+	possible_editCase = 0
+    }
+    # Closing brace found?
+    else if (editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = editCase_full_line;
+	    fail("editCase function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_editCase = 0
+	editCase_full_line = ""
+    } else {
+	editCase_full_line = editCase_full_line $0;
+    }
+}
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/
{
+    possible_editCase = 1
+    if (ARI_OK == "editCase function") {
+        possible_editCase = 0
+    }
+    possible_FNR = FNR
+    editCase_full_line = $0
+}
+
+# Only function implementation should be on first column
+BEGIN { doc["function call in first column"] = "\
+Function name in first column should be restricted to function
implementation"
+    category["function call in first column"] = ari_code
+}
+/^[a-z][a-z0-9_]*[[:space:]]*\((|[^*][^()]*)\)[[:space:]]*[^ \t]+/ {
+    fail("function call in first column")
+}
+
+
+# Functions without any parameter should have (void)
+# after their name not simply ().
+BEGIN { doc["no parameter function"] = "\
+Function having no parameter should be declared with funcname (void)."
+    category["no parameter function"] = ari_code
+}
+/^[a-zA-Z][a-z0-9A-Z_]*[[:space:]]*\(\)/ {
+    fail("no parameter function")
+}
+
+BEGIN { doc["hash"] = "\
+Do not use ` #...'\'', instead use `#...'\''(some compilers only correctly
\
+parse a C preprocessor directive when `#'\'' is the first character on \
+the line)"
+    category["hash"] = ari_regression
+}
+/^[[:space:]]+#/ {
+    fail("hash")
+}
+
+BEGIN { doc["OP eol"] = "\
+Do not use &&, or || at the end of a line"
+    category["OP eol"] = ari_code
+}
+/(\|\||\&\&|==|!=)[[:space:]]*$/ {
+    fail("OP eol")
+}
+
+BEGIN { doc["strerror"] = "\
+Do not use strerror(), instead use safe_strerror()"
+    category["strerror"] = ari_regression
+    fix("strerror", "gdb/gdb_string.h", 1)
+    fix("strerror", "gdb/mingw-hdep.c", 1)
+    fix("strerror", "gdb/posix-hdep.c", 1)
+}
+/(^|[^_[:alnum:]])strerror[[:space:]]*\(/ {
+    fail("strerror")
+}
+
+BEGIN { doc["long long"] = "\
+Do not use `long long'\'', instead use LONGEST"
+    category["long long"] = ari_code
+    # defs.h needs two such patterns for LONGEST and ULONGEST definitions
+    fix("long long", "gdb/defs.h", 2)
+}
+/(^|[^_[:alnum:]])long[[:space:]]+long([^_[:alnum:]]|$)/ {
+    fail("long long")
+}
+
+BEGIN { doc["ATTRIBUTE_UNUSED"] = "\
+Do not use ATTRIBUTE_UNUSED, do not bother (GDB is compiled with -Werror
and, \
+consequently, is not able to tolerate false warnings.  Since -Wunused-param
\
+produces such warnings, neither that warning flag nor ATTRIBUTE_UNUSED \
+are used by GDB"
+    category["ATTRIBUTE_UNUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTRIBUTE_UNUSED([^_[:alnum:]]|$)/ {
+    fail("ATTRIBUTE_UNUSED")
+}
+
+BEGIN { doc["ATTR_FORMAT"] = "\
+Do not use ATTR_FORMAT, use ATTRIBUTE_PRINTF instead"
+    category["ATTR_FORMAT"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_FORMAT([^_[:alnum:]]|$)/ {
+    fail("ATTR_FORMAT")
+}
+
+BEGIN { doc["ATTR_NORETURN"] = "\
+Do not use ATTR_NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["ATTR_NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_NORETURN([^_[:alnum:]]|$)/ {
+    fail("ATTR_NORETURN")
+}
+
+BEGIN { doc["NORETURN"] = "\
+Do not use NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])NORETURN([^_[:alnum:]]|$)/ {
+    fail("NORETURN")
+}
+
+
+# General problems
+
+BEGIN { doc["multiple messages"] = "\
+Do not use multiple calls to warning or error, instead use a single call"
+    category["multiple messages"] = ari_gettext
+}
+FNR == 1 {
+    warning_fnr = -1
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(/ {
+    if (FNR == warning_fnr + 1) {
+	fail("multiple messages")
+    } else {
+	warning_fnr = FNR
+    }
+}
+
+# Commented out, but left inside sources, just in case.
+# BEGIN { doc["inline"] = "\
+# Do not use the inline attribute; \
+# since the compiler generally ignores this, better algorithm selection \
+# is needed to improved performance"
+#    category["inline"] = ari_code
+# }
+# /(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
+#     fail("inline")
+# }
+
+# This test is obsolete as this type
+# has been deprecated and finally suppressed from GDB sources
+#BEGIN { doc["obj_private"] = "\
+#Replace obj_private with objfile_data"
+#    category["obj_private"] = ari_obsolete
+#}
+#/(^|[^_[:alnum:]])obj_private([^_[:alnum:]]|$)/ {
+#    fail("obj_private")
+#}
+
+BEGIN { doc["abort"] = "\
+Do not use abort, instead use internal_error; GDB should never abort"
+    category["abort"] = ari_regression
+    fix("abort", "gdb/utils.c", 3)
+}
+/(^|[^_[:alnum:]])abort[[:space:]]*\(/ {
+    fail("abort")
+}
+
+BEGIN { doc["basename"] = "\
+Do not use basename, instead use lbasename"
+    category["basename"] = ari_regression
+}
+/(^|[^_[:alnum:]])basename[[:space:]]*\(/ {
+    fail("basename")
+}
+
+BEGIN { doc["assert"] = "\
+Do not use assert, instead use gdb_assert or internal_error; assert \
+calls abort and GDB should never call abort"
+    category["assert"] = ari_regression
+}
+/(^|[^_[:alnum:]])assert[[:space:]]*\(/ {
+    fail("assert")
+}
+
+BEGIN { doc["TARGET_HAS_HARDWARE_WATCHPOINTS"] = "\
+Replace TARGET_HAS_HARDWARE_WATCHPOINTS with nothing, not needed"
+    category["TARGET_HAS_HARDWARE_WATCHPOINTS"] = ari_regression
+}
+/(^|[^_[:alnum:]])TARGET_HAS_HARDWARE_WATCHPOINTS([^_[:alnum:]]|$)/ {
+    fail("TARGET_HAS_HARDWARE_WATCHPOINTS")
+}
+
+BEGIN { doc["ADD_SHARED_SYMBOL_FILES"] = "\
+Replace ADD_SHARED_SYMBOL_FILES with nothing, not needed?"
+    category["ADD_SHARED_SYMBOL_FILES"] = ari_regression
+}
+/(^|[^_[:alnum:]])ADD_SHARED_SYMBOL_FILES([^_[:alnum:]]|$)/ {
+    fail("ADD_SHARED_SYMBOL_FILES")
+}
+
+BEGIN { doc["SOLIB_ADD"] = "\
+Replace SOLIB_ADD with nothing, not needed?"
+    category["SOLIB_ADD"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_ADD([^_[:alnum:]]|$)/ {
+    fail("SOLIB_ADD")
+}
+
+BEGIN { doc["SOLIB_CREATE_INFERIOR_HOOK"] = "\
+Replace SOLIB_CREATE_INFERIOR_HOOK with nothing, not needed?"
+    category["SOLIB_CREATE_INFERIOR_HOOK"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_CREATE_INFERIOR_HOOK([^_[:alnum:]]|$)/ {
+    fail("SOLIB_CREATE_INFERIOR_HOOK")
+}
+
+BEGIN { doc["SOLIB_LOADED_LIBRARY_PATHNAME"] = "\
+Replace SOLIB_LOADED_LIBRARY_PATHNAME with nothing, not needed?"
+    category["SOLIB_LOADED_LIBRARY_PATHNAME"] = ari_regression
+}
+/(^|[^_[:alnum:]])SOLIB_LOADED_LIBRARY_PATHNAME([^_[:alnum:]]|$)/ {
+    fail("SOLIB_LOADED_LIBRARY_PATHNAME")
+}
+
+BEGIN { doc["REGISTER_U_ADDR"] = "\
+Replace REGISTER_U_ADDR with nothing, not needed?"
+    category["REGISTER_U_ADDR"] = ari_regression
+}
+/(^|[^_[:alnum:]])REGISTER_U_ADDR([^_[:alnum:]]|$)/ {
+    fail("REGISTER_U_ADDR")
+}
+
+BEGIN { doc["PROCESS_LINENUMBER_HOOK"] = "\
+Replace PROCESS_LINENUMBER_HOOK with nothing, not needed?"
+    category["PROCESS_LINENUMBER_HOOK"] = ari_regression
+}
+/(^|[^_[:alnum:]])PROCESS_LINENUMBER_HOOK([^_[:alnum:]]|$)/ {
+    fail("PROCESS_LINENUMBER_HOOK")
+}
+
+BEGIN { doc["PC_SOLIB"] = "\
+Replace PC_SOLIB with nothing, not needed?"
+    category["PC_SOLIB"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])PC_SOLIB([^_[:alnum:]]|$)/ {
+    fail("PC_SOLIB")
+}
+
+BEGIN { doc["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = "\
+Replace IN_SOLIB_DYNSYM_RESOLVE_CODE with nothing, not needed?"
+    category["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = ari_regression
+}
+/(^|[^_[:alnum:]])IN_SOLIB_DYNSYM_RESOLVE_CODE([^_[:alnum:]]|$)/ {
+    fail("IN_SOLIB_DYNSYM_RESOLVE_CODE")
+}
+
+BEGIN { doc["GCC_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["GCC2_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC2_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC2_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC2_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC2_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["FUNCTION_EPILOGUE_SIZE"] = "\
+Replace FUNCTION_EPILOGUE_SIZE with nothing, not needed?"
+    category["FUNCTION_EPILOGUE_SIZE"] = ari_regression
+}
+/(^|[^_[:alnum:]])FUNCTION_EPILOGUE_SIZE([^_[:alnum:]]|$)/ {
+    fail("FUNCTION_EPILOGUE_SIZE")
+}
+
+BEGIN { doc["HAVE_VFORK"] = "\
+Do not use HAVE_VFORK, instead include \"gdb_vfork.h\" and call vfork() \
+unconditionally"
+    category["HAVE_VFORK"] = ari_regression
+}
+/(^|[^_[:alnum:]])HAVE_VFORK([^_[:alnum:]]|$)/ {
+    fail("HAVE_VFORK")
+}
+
+BEGIN { doc["bcmp"] = "\
+Do not use bcmp(), ISO C 90 implies memcmp()"
+    category["bcmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcmp[[:space:]]*\(/ {
+    fail("bcmp")
+}
+
+BEGIN { doc["setlinebuf"] = "\
+Do not use setlinebuf(), ISO C 90 implies setvbuf()"
+    category["setlinebuf"] = ari_regression
+}
+/(^|[^_[:alnum:]])setlinebuf[[:space:]]*\(/ {
+    fail("setlinebuf")
+}
+
+BEGIN { doc["bcopy"] = "\
+Do not use bcopy(), ISO C 90 implies memcpy() and memmove()"
+    category["bcopy"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcopy[[:space:]]*\(/ {
+    fail("bcopy")
+}
+
+BEGIN { doc["get_frame_base"] = "\
+Replace get_frame_base with get_frame_id, get_frame_base_address, \
+get_frame_locals_address, or get_frame_args_address."
+    category["get_frame_base"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])get_frame_base([^_[:alnum:]]|$)/ {
+    fail("get_frame_base")
+}
+
+BEGIN { doc["floatformat_to_double"] = "\
+Do not use floatformat_to_double() from libierty, \
+instead use floatformat_to_doublest()"
+    fix("floatformat_to_double", "gdb/doublest.c", 1)
+    category["floatformat_to_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_to_double[[:space:]]*\(/ {
+    fail("floatformat_to_double")
+}
+
+BEGIN { doc["floatformat_from_double"] = "\
+Do not use floatformat_from_double() from libierty, \
+instead use floatformat_from_doublest()"
+    category["floatformat_from_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_from_double[[:space:]]*\(/ {
+    fail("floatformat_from_double")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["LITTLE_ENDIAN"] = "\
+Do not use LITTLE_ENDIAN, instead use BFD_ENDIAN_LITTLE";
+    category["LITTLE_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])LITTLE_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("LITTLE_ENDIAN")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["sec_ptr"] = "\
+Instead of sec_ptr, use struct bfd_section";
+    category["sec_ptr"] = ari_regression
+}
+/(^|[^_[:alnum:]])sec_ptr([^_[:alnum:]]|$)/ {
+    fail("sec_ptr")
+}
+
+BEGIN { doc["frame_unwind_unsigned_register"] = "\
+Replace frame_unwind_unsigned_register with frame_unwind_register_unsigned"
+    category["frame_unwind_unsigned_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])frame_unwind_unsigned_register([^_[:alnum:]]|$)/ {
+    fail("frame_unwind_unsigned_register")
+}
+
+BEGIN { doc["frame_register_read"] = "\
+Replace frame_register_read() with get_frame_register(), or \
+possibly introduce a new method safe_get_frame_register()"
+    category["frame_register_read"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])frame_register_read([^_[:alnum:]]|$)/ {
+    fail("frame_register_read")
+}
+
+BEGIN { doc["read_register"] = "\
+Replace read_register() with regcache_read() et.al."
+    category["read_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_register([^_[:alnum:]]|$)/ {
+    fail("read_register")
+}
+
+BEGIN { doc["write_register"] = "\
+Replace write_register() with regcache_read() et.al."
+    category["write_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])write_register([^_[:alnum:]]|$)/ {
+    fail("write_register")
+}
+
+function report(name) {
+    # Drop any trailing _P.
+    name = gensub(/(_P|_p)$/, "", 1, name)
+    # Convert to lower case
+    name = tolower(name)
+    # Split into category and bug
+    cat = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\1", 1, name)
+    bug = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\2", 1, name)
+    # Report it
+    name = cat " " bug
+    doc[name] = "Do not use " cat " " bug ", see declaration for details"
+    category[name] = cat
+    fail(name)
+}
+
+/(^|[^_[:alnum:]])(DEPRECATED|deprecated|set_gdbarch_deprecated|LEGACY|lega
cy|set_gdbarch_legacy)_/ {
+    line = $0
+    # print "0 =", $0
+    while (1) {
+	name =
gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:
]]*)(.*)$/, "\\2", 1, line)
+	line =
gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:
]]*)(.*)$/, "\\1 \\4", 1, line)
+	# print "name =", name, "line =", line
+	if (name == line) break;
+	report(name)
+    }
+}
+
+# Count the number of times each architecture method is set
+/(^|[^_[:alnum:]])set_gdbarch_[_[:alnum:]]*([^_[:alnum:]]|$)/ {
+    name = gensub(/^.*set_gdbarch_([_[:alnum:]]*).*$/, "\\1", 1, $0)
+    doc["set " name] = "\
+Call to set_gdbarch_" name
+    category["set " name] = ari_gdbarch
+    fail("set " name)
+}
+
+# Count the number of times each tm/xm/nm macro is defined or undefined
+/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+.*$/ \
+&&
!/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+_H($|[[:space:]])/ \
+&& FILENAME ~ /(^|\/)config\/(|[^\/]*\/)(tm-|xm-|nm-).*\.h$/ {
+    basename = gensub(/(^|.*\/)([^\/]*)$/, "\\2", 1, FILENAME)
+    type = gensub(/^(tm|xm|nm)-.*\.h$/, "\\1", 1, basename)
+    name =
gensub(/^#[[:space:]]*(undef|define)[[:space:]]+([[:alnum:]_]+).*$/, "\\2",
1, $0)
+    if (type == basename) {
+        type = "macro"
+    }
+    doc[type " " name] = "\
+Do not define macros such as " name " in a tm, nm or xm file, \
+in fact do not provide a tm, nm or xm file"
+    category[type " " name] = ari_macro
+    fail(type " " name)
+}
+
+BEGIN { doc["deprecated_registers"] = "\
+Replace deprecated_registers with nothing, they have reached \
+end-of-life"
+    category["deprecated_registers"] = ari_eol
+}
+/(^|[^_[:alnum:]])deprecated_registers([^_[:alnum:]]|$)/ {
+    fail("deprecated_registers")
+}
+
+BEGIN { doc["read_pc"] = "\
+Replace READ_PC() with frame_pc_unwind; \
+at present the inferior function call code still uses this"
+    category["read_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_PC[[:space:]]*\(/ {
+    fail("read_pc")
+}
+
+BEGIN { doc["write_pc"] = "\
+Replace write_pc() with get_frame_base_address or get_frame_id; \
+at present the inferior function call code still uses this when doing \
+a DECR_PC_AFTER_BREAK"
+    category["write_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_WRITE_PC[[:space:]]*\(/ {
+    fail("write_pc")
+}
+
+BEGIN { doc["generic_target_write_pc"] = "\
+Replace generic_target_write_pc with a per-architecture implementation, \
+this relies on PC_REGNUM which is being eliminated"
+    category["generic_target_write_pc"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_target_write_pc([^_[:alnum:]]|$)/ {
+    fail("generic_target_write_pc")
+}
+
+BEGIN { doc["read_sp"] = "\
+Replace read_sp() with frame_sp_unwind"
+    category["read_sp"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_SP[[:space:]]*\(/ {
+    fail("read_sp")
+}
+
+BEGIN { doc["register_cached"] = "\
+Replace register_cached() with nothing, does not have a regcache parameter"
+    category["register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])register_cached[[:space:]]*\(/ {
+    fail("register_cached")
+}
+
+BEGIN { doc["set_register_cached"] = "\
+Replace set_register_cached() with nothing, does not have a regcache
parameter"
+    category["set_register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])set_register_cached[[:space:]]*\(/ {
+    fail("set_register_cached")
+}
+
+# Print functions: Use versions that either check for buffer overflow
+# or safely allocate a fresh buffer.
+
+BEGIN { doc["sprintf"] = "\
+Do not use sprintf, instead use xsnprintf or xstrprintf"
+    category["sprintf"] = ari_code
+}
+/(^|[^_[:alnum:]])sprintf[[:space:]]*\(/ {
+    fail("sprintf")
+}
+
+BEGIN { doc["vsprintf"] = "\
+Do not use vsprintf(), instead use xstrvprintf"
+    category["vsprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vsprintf[[:space:]]*\(/ {
+    fail("vsprintf")
+}
+
+BEGIN { doc["asprintf"] = "\
+Do not use asprintf(), instead use xstrprintf()"
+    category["asprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])asprintf[[:space:]]*\(/ {
+    fail("asprintf")
+}
+
+BEGIN { doc["vasprintf"] = "\
+Do not use vasprintf(), instead use xstrvprintf"
+    fix("vasprintf", "gdb/utils.c", 1)
+    category["vasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vasprintf[[:space:]]*\(/ {
+    fail("vasprintf")
+}
+
+BEGIN { doc["xasprintf"] = "\
+Do not use xasprintf(), instead use xstrprintf"
+    fix("xasprintf", "gdb/defs.h", 1)
+    fix("xasprintf", "gdb/utils.c", 1)
+    category["xasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xasprintf[[:space:]]*\(/ {
+    fail("xasprintf")
+}
+
+BEGIN { doc["xvasprintf"] = "\
+Do not use xvasprintf(), instead use xstrvprintf"
+    fix("xvasprintf", "gdb/defs.h", 1)
+    fix("xvasprintf", "gdb/utils.c", 1)
+    category["xvasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xvasprintf[[:space:]]*\(/ {
+    fail("xvasprintf")
+}
+
+# More generic memory operations
+
+BEGIN { doc["bzero"] = "\
+Do not use bzero(), instead use memset()"
+    category["bzero"] = ari_regression
+}
+/(^|[^_[:alnum:]])bzero[[:space:]]*\(/ {
+    fail("bzero")
+}
+
+BEGIN { doc["strdup"] = "\
+Do not use strdup(), instead use xstrdup()";
+    category["strdup"] = ari_regression
+}
+/(^|[^_[:alnum:]])strdup[[:space:]]*\(/ {
+    fail("strdup")
+}
+
+BEGIN { doc["strsave"] = "\
+Do not use strsave(), instead use xstrdup() et.al."
+    category["strsave"] = ari_regression
+}
+/(^|[^_[:alnum:]])strsave[[:space:]]*\(/ {
+    fail("strsave")
+}
+
+# String compare functions
+
+BEGIN { doc["strnicmp"] = "\
+Do not use strnicmp(), instead use strncasecmp()"
+    category["strnicmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])strnicmp[[:space:]]*\(/ {
+    fail("strnicmp")
+}
+
+# Boolean expressions and conditionals
+
+BEGIN { doc["boolean"] = "\
+Do not use `boolean'\'',  use `int'\'' instead"
+    category["boolean"] = ari_regression
+}
+/(^|[^_[:alnum:]])boolean([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("boolean")
+    }
+}
+
+BEGIN { doc["false"] = "\
+Definitely do not use `false'\'' in boolean expressions"
+    category["false"] = ari_regression
+}
+/(^|[^_[:alnum:]])false([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("false")
+    }
+}
+
+BEGIN { doc["true"] = "\
+Do not try to use `true'\'' in boolean expressions"
+    category["true"] = ari_regression
+}
+/(^|[^_[:alnum:]])true([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("true")
+    }
+}
+
+# Typedefs that are either redundant or can be reduced to `struct
+# type *''.
+# Must be placed before if assignment otherwise ARI exceptions
+# are not handled correctly.
+
+BEGIN { doc["d_namelen"] = "\
+Do not use dirent.d_namelen, instead use NAMELEN"
+    category["d_namelen"] = ari_regression
+}
+/(^|[^_[:alnum:]])d_namelen([^_[:alnum:]]|$)/ {
+    fail("d_namelen")
+}
+
+BEGIN { doc["strlen d_name"] = "\
+Do not use strlen dirent.d_name, instead use NAMELEN"
+    category["strlen d_name"] = ari_regression
+}
+/(^|[^_[:alnum:]])strlen[[:space:]]*\(.*[^_[:alnum:]]d_name([^_[:alnum:]]|$
)/ {
+    fail("strlen d_name")
+}
+
+BEGIN { doc["var_boolean"] = "\
+Replace var_boolean with add_setshow_boolean_cmd"
+    category["var_boolean"] = ari_regression
+    fix("var_boolean", "gdb/command.h", 1)
+    # fix only uses the last directory level
+    fix("var_boolean", "cli/cli-decode.c", 2)
+}
+/(^|[^_[:alnum:]])var_boolean([^_[:alnum:]]|$)/ {
+    if ($0 !~ /(^|[^_[:alnum:]])case *var_boolean:/) {
+	fail("var_boolean")
+    }
+}
+
+BEGIN { doc["generic_use_struct_convention"] = "\
+Replace generic_use_struct_convention with nothing, \
+EXTRACT_STRUCT_VALUE_ADDRESS is a predicate"
+    category["generic_use_struct_convention"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_use_struct_convention([^_[:alnum:]]|$)/ {
+    fail("generic_use_struct_convention")
+}
+
+BEGIN { doc["if assignment"] = "\
+An IF statement'\''s expression contains an assignment (the GNU coding \
+standard discourages this)"
+    category["if assignment"] = ari_code
+}
+BEGIN { doc["if clause more than 50 lines"] = "\
+An IF statement'\''s expression expands over 50 lines"
+    category["if clause more than 50 lines"] = ari_code
+}
+#
+# Accumulate continuation lines
+FNR == 1 {
+    in_if = 0
+}
+
+/(^|[^_[:alnum:]])if / {
+    in_if = 1;
+    if_brace_level = 0;
+    if_cont_p = 0;
+    if_count = 0;
+    if_brace_end_pos = 0;
+    if_full_line = "";
+}
+(in_if)  {
+    # We want everything up to closing brace of same level
+    if_count++;
+    if (if_count > 50) {
+	print "multiline if: " if_full_line $0
+	fail("if clause more than 50 lines")
+	if_brace_level = 0;
+	if_full_line = "";
+    } else {
+	if (if_count == 1) {
+	    i = index($0,"if ");
+	} else {
+	    i = 1;
+	}
+	for (i=i; i <= length($0); i++) {
+	    char = substr($0,i,1);
+	    if (char == "(") { if_brace_level++; }
+	    if (char == ")") {
+		if_brace_level--;
+		if (!if_brace_level) {
+		    if_brace_end_pos = i;
+		    after_if = substr($0,i+1,length($0));
+		    # Do not parse what is following
+		    break;
+		}
+	    }
+	}
+	if (if_brace_level == 0) {
+	    $0 = substr($0,1,i);
+	    in_if = 0;
+	} else {
+	    if_full_line = if_full_line $0;
+	    if_cont_p = 1;
+	    next;
+	}
+    }
+}
+# if we arrive here, we need to concatenate, but we are at brace level 0
+
+(if_brace_end_pos) {
+    $0 = if_full_line substr($0,1,if_brace_end_pos);
+    if (if_count > 1) {
+	# print "IF: multi line " if_count " found at " FILENAME ":" FNR "
\"" $0 "\""
+    }
+    if_cont_p = 0;
+    if_full_line = "";
+}
+/(^|[^_[:alnum:]])if .* = / {
+    # print "fail in if " $0
+    fail("if assignment")
+}
+(if_brace_end_pos) {
+    $0 = $0 after_if;
+    if_brace_end_pos = 0;
+    in_if = 0;
+}
+
+# Printout of all found bug
+
+BEGIN {
+    if (print_doc) {
+	for (bug in doc) {
+	    fail(bug)
+	}
+	exit
+    }
+}' "$@"
+
Index: contrib/ari/gdb_find.sh
===================================================================
RCS file: contrib/ari/gdb_find.sh
diff -N contrib/ari/gdb_find.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/gdb_find.sh	18 May 2012 22:31:42 -0000
@@ -0,0 +1,41 @@
+#!/bin/sh
+
+# GDB script to create list of files to check using gdb_ari.sh.
+#
+# Copyright (C) 2003-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=C ; export LANG
+LC_ALL=C ; export LC_ALL
+
+
+# A find that prunes files that GDB users shouldn't be interested in.
+# Use sort to order files alphabetically.
+
+find "$@" \
+    -name testsuite -prune -o \
+    -name gdbserver -prune -o \
+    -name gnulib -prune -o \
+    -name osf-share -prune -o \
+    -name '*-stub.c' -prune -o \
+    -name '*-exp.c' -prune -o \
+    -name ada-lex.c -prune -o \
+    -name cp-name-parser.c -prune -o \
+    -type f -name '*.[lyhc]' -print | sort
Index: contrib/ari/update-web-ari.sh
===================================================================
RCS file: contrib/ari/update-web-ari.sh
diff -N contrib/ari/update-web-ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/update-web-ari.sh	18 May 2012 22:31:43 -0000
@@ -0,0 +1,947 @@
+#!/bin/sh -x
+
+# GDB script to create GDB ARI web page.
+#
+# Copyright (C) 2001-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# TODO: setjmp.h, setjmp and longjmp.
+
+# Direct stderr into stdout but still hang onto stderr (/dev/fd/3)
+exec 3>&2 2>&1
+ECHO ()
+{
+#   echo "$@" | tee /dev/fd/3 1>&2
+    echo "$@" 1>&2
+    echo "$@" 1>&3
+}
+
+# Really mindless usage
+if test $# -ne 4
+then
+    echo "Usage: $0 <snapshot/sourcedir> <tmpdir> <destdir> <project>" 1>&2
+    exit 1
+fi
+snapshot=$1 ; shift
+tmpdir=$1 ; shift
+wwwdir=$1 ; shift
+project=$1 ; shift
+
+# Try to create destination directory if it doesn't exist yet
+if [ ! -d ${wwwdir} ]
+then
+  mkdir -p ${wwwdir}
+fi
+
+# Fail if destination directory doesn't exist or is not writable
+if [ ! -w ${wwwdir} -o ! -d ${wwwdir} ]
+then
+  echo ERROR: Can not write to directory ${wwwdir} >&2
+  exit 2
+fi
+
+if [ ! -r ${snapshot} ]
+then
+    echo ERROR: Can not read snapshot file 1>&2
+    exit 1
+fi
+
+# FILE formats
+# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+# Where ``*'' is {source,warning,indent,doschk}
+
+unpack_source_p=true
+delete_source_p=true
+
+check_warning_p=false # broken
+check_indent_p=false # too slow, too many fail
+check_source_p=true
+check_doschk_p=true
+check_werror_p=true
+
+update_doc_p=true
+update_web_p=true
+
+if [ ! -z "$send_email" ]
+then
+  send_email=false
+fi
+
+if awk --version 2>&1 </dev/null | grep -i gnu > /dev/null
+then
+  AWK=awk
+else
+  AWK=gawk
+fi
+
+
+# Set up a few cleanups
+if ${delete_source_p}
+then
+    trap "cd /tmp; rm -rf ${tmpdir}; exit" 0 1 2 15
+fi
+
+
+# If the first parameter is a directory,
+#we just use it as the extracted source
+if [ -d ${snapshot} ]
+then
+  module=${project}
+  srcdir=${snapshot}
+  aridir=${srcdir}/${module}/ari
+  unpack_source_p=false
+  delete_source_p=false
+  version_in=${srcdir}/${module}/version.in
+else
+  # unpack the tar-ball
+  if ${unpack_source_p}
+  then
+    # Was it previously unpacked?
+    if ${delete_source_p} || test ! -d ${tmpdir}/${module}*
+    then
+	/bin/rm -rf "${tmpdir}"
+	/bin/mkdir -p ${tmpdir}
+	if [ ! -d ${tmpdir} ]
+	then
+	    echo "Problem creating work directory"
+	    exit 1
+	fi
+	cd ${tmpdir} || exit 1
+	echo `date`: Unpacking tar-ball ...
+	case ${snapshot} in
+	    *.tar.bz2 ) bzcat ${snapshot} ;;
+	    *.tar ) cat ${snapshot} ;;
+	    * ) ECHO Bad file ${snapshot} ; exit 1 ;;
+	esac | tar xf -
+    fi
+  fi
+
+  module=`basename ${snapshot}`
+  module=`basename ${module} .bz2`
+  module=`basename ${module} .tar`
+  srcdir=`echo ${tmpdir}/${module}*`
+  aridir=${HOME}/ss
+  version_in=${srcdir}/gdb/version.in
+fi
+
+if [ ! -r ${version_in} ]
+then
+    echo ERROR: missing version file 1>&2
+    exit 1
+fi
+version=`cat ${version_in}`
+
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_warning_p} && test -d "${srcdir}"
+then
+    echo `date`: Parsing compiler warnings 1>&2
+    cat ${root}/ari.compile | $AWK '
+BEGIN {
+    FS=":";
+}
+/^[^:]*:[0-9]*: warning:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  warning[file] += 1;
+}
+/^[^:]*:[0-9]*: error:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  error[file] += 1;
+}
+END {
+  for (file in warning) {
+    print file ":warning:" level[file]
+  }
+  for (file in error) {
+    print file ":error:" level[file]
+  }
+}
+' > ${root}/ari.warning.bug
+fi
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_indent_p} && test -d "${srcdir}"
+then
+    printf "Analizing file indentation:" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh ${project} | while
read f
+    do
+	if /bin/sh ${aridir}/gdb_indent.sh < ${f} 2>/dev/null | cmp -s -
${f}
+	then
+	    :
+	else
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    echo "${f}:0: info: indent: Indentation does not match GNU
indent output"
+	fi
+    done ) > ${wwwdir}/ari.indent.bug
+    echo ""
+fi
+
+if ${check_source_p} && test -d "${srcdir}"
+then
+    bugf=${wwwdir}/ari.source.bug
+    oldf=${wwwdir}/ari.source.old
+    srcf=${wwwdir}/ari.source.lines
+    oldsrcf=${wwwdir}/ari.source.lines-old
+
+    diff=${wwwdir}/ari.source.diff
+    diffin=${diff}-in
+    newf1=${bugf}1
+    oldf1=${oldf}1
+    oldpruned=${oldf1}-pruned
+    newpruned=${newf1}-pruned
+
+    cp -f ${bugf} ${oldf}
+    cp -f ${srcf} ${oldsrcf}
+    rm -f ${srcf}
+    node=`uname -n`
+    echo "`date`: Using source lines ${srcf}" 1>&2
+    echo "`date`: Checking source code" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh "${project}" | \
+	xargs /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx
--src=${srcf}
+    ) > ${bugf}
+    # Remove things we are not interested in to signal by email
+    # gdbarch changes are not important here
+    # Also convert ` into ' to avoid command substitution in script below
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${oldf} > ${oldf1}
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${bugf} > ${newf1}
+    # Remove line number info so that code inclusion/deletion
+    # has no impact on the result
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${oldf1} > ${oldpruned}
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${newf1} > ${newpruned}
+    # Use diff without option to get normal diff output that
+    # is reparsed after
+    diff ${oldpruned} ${newpruned} > ${diffin}
+    # Only keep new warnings
+    sed -n -e "/^>.*/p" ${diffin} > ${diff}
+    sedscript=${wwwdir}/sedscript
+    script=${wwwdir}/script
+    sed -n -e "s|\(^[0-9,]*\)a\(.*\)|echo \1a\2 \n \
+	sed -n \'\2s:\\\\(.*\\\\):> \\\\1:p\' ${newf1}|p" \
+	-e "s|\(^[0-9,]*\)d\(.*\)|echo \1d\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1}|p" \
+	-e "s|\(^[0-9,]*\)c\(.*\)|echo \1c\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1} \n \
+	sed -n \"\2s:\\\\(.*\\\\):> \\\\1:p\" ${newf1}|p" \
+	${diffin} > ${sedscript}
+    ${SHELL} ${sedscript} > ${wwwdir}/message
+    sed -n \
+	-e "s;\(.*\);echo \\\"\1\\\";p" \
+	-e "s;.*< \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${oldsrcf};p" \
+	-e "s;.*> \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${srcf};p" \
+	${wwwdir}/message > ${script}
+    ${SHELL} ${script} > ${wwwdir}/mail-message
+    if [ "x${branch}" != "x" ]; then
+	email_suffix="`date` in ${branch}"
+    else
+	email_suffix="`date`"
+    fi
+
+    if [ "$send_email" == "true" ]; then
+      if [ "${node}" = "sourceware.org" ]; then
+	warning_email=gdb-patches@sourceware.org
+      else
+        # Use default email
+	warning_email=${USER}@${node}
+      fi
+
+      # Check if ${diff} is not empty
+      if [ -s ${diff} ]; then
+	# Send an email $warning_email
+	mutt -s "New ARI warning ${email_suffix}" \
+	    ${warning_email} < ${wwwdir}/mail-message
+      else
+        if [ -s ${wwwdir}/${mail-message} ]; then
+	  # Send an email to $warning_email
+	  mutt -s "ARI warning list change ${email_suffix}" \
+	    ${warning_email} < ${wwwdir}/mail-message
+        fi
+      fi
+    fi
+fi
+
+
+
+
+if ${check_doschk_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking for doschk" 1>&2
+    rm -f "${wwwdir}"/ari.doschk.*
+    fnchange_lst="${srcdir}"/gdb/config/djgpp/fnchange.lst
+    fnchange_awk="${wwwdir}"/ari.doschk.awk
+    doschk_in="${wwwdir}"/ari.doschk.in
+    doschk_out="${wwwdir}"/ari.doschk.out
+    doschk_bug="${wwwdir}"/ari.doschk.bug
+    doschk_char="${wwwdir}"/ari.doschk.char
+
+    # Transform fnchange.lst into fnchange.awk.  The program DJTAR
+    # does a textual substitution of each file name using the list.
+    # Generate an awk script that does the equivalent - matches an
+    # exact line and then outputs the replacement.
+
+    sed -e 's;@[^@]*@[/]*\([^ ]*\) @[^@]*@[/]*\([^ ]*\);\$0 == "\1" { print
"\2"\; next\; };' \
+	< "${fnchange_lst}" > "${fnchange_awk}"
+    echo '{ print }' >> "${fnchange_awk}"
+
+    # Do the raw analysis - transform the list of files into the DJGPP
+    # equivalents putting it in the .in file
+    ( cd "${srcdir}" && find * \
+	-name '*.info-[0-9]*' -prune \
+	-o -name tcl -prune \
+	-o -name itcl -prune \
+	-o -name tk -prune \
+	-o -name libgui -prune \
+	-o -name tix -prune \
+	-o -name dejagnu -prune \
+	-o -name expect -prune \
+	-o -type f -print ) \
+    | $AWK -f ${fnchange_awk} > ${doschk_in}
+
+    # Start with a clean slate
+    rm -f ${doschk_bug}
+
+    # Check for any invalid characters.
+    grep '[\+\,\;\=\[\]\|\<\>\\\"\:\?\*]' < ${doschk_in} > ${doschk_char}
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    sed < ${doschk_char} >> ${doschk_bug} \
+	-e s'/$/:0: dos: DOSCHK: Invalid DOS character/'
+
+    # Magic to map ari.doschk.out to ari.doschk.bug goes here
+    doschk < ${doschk_in} > ${doschk_out}
+    cat ${doschk_out} | $AWK >> ${doschk_bug} '
+BEGIN {
+    state = 1;
+    invalid_dos = state++; bug[invalid_dos] = "invalid DOS file name";
category[invalid_dos] = "dos";
+    same_dos = state++;    bug[same_dos]    = "DOS 8.3";
category[same_dos] = "dos";
+    same_sysv = state++;   bug[same_sysv]   = "SysV";
+    long_sysv = state++;   bug[long_sysv]   = "long SysV";
+    internal = state++;    bug[internal]    = "internal doschk";
category[internal] = "internal";
+    state = 0;
+}
+/^$/ { state = 0; next; }
+/^The .* not valid DOS/     { state = invalid_dos; next; }
+/^The .* same DOS/          { state = same_dos; next; }
+/^The .* same SysV/         { state = same_sysv; next; }
+/^The .* too long for SysV/ { state = long_sysv; next; }
+/^The .* /                  { state = internal; next; }
+
+NF == 0 { next }
+
+NF == 3 { name = $1 ; file = $3 }
+NF == 1 { file = $1 }
+NF > 3 && $2 == "-" { file = $1 ; name = gensub(/^.* - /, "", 1) }
+
+state == same_dos {
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print  file ":0: " category[state] ": " \
+	name " " bug[state] " " " dup: " \
+	" DOSCHK - the names " name " and " file " resolve to the same" \
+	" file on a " bug[state] \
+	" system.<br>For DOS, this can be fixed by modifying the file" \
+	" fnchange.lst."
+    next
+}
+state == invalid_dos {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  name ": DOSCHK - " name
+    next
+}
+state == internal {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  bug[state] ": DOSCHK - a " \
+	bug[state] " problem"
+}
+'
+fi
+
+
+
+if ${check_werror_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking Makefile.in for non- -Werror rules"
+    rm -f ${wwwdir}/ari.werror.*
+    cat "${srcdir}/${project}/Makefile.in" | $AWK >
${wwwdir}/ari.werror.bug '
+BEGIN {
+    count = 0
+    cont_p = 0
+    full_line = ""
+}
+/^[-_[:alnum:]]+\.o:/ {
+    file = gensub(/.o:.*/, "", 1) ".c"
+}
+
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next;
}
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+/\$\(COMPILE\.pre\)/ {
+    print file " has  line " $0
+    if (($0 !~ /\$\(.*ERROR_CFLAGS\)/) && ($0 !~ /\$\(INTERNAL_CFLAGS\)/))
{
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print "'"${project}"'/" file ":0: info: Werror: The file is not
being compiled with -Werror"
+    }
+}
+'
+fi
+
+
+# From the warnings, generate the doc and indexed bug files
+if ${update_doc_p}
+then
+    cd ${wwwdir}
+    rm -f ari.doc ari.idx ari.doc.bug
+    # Generate an extra file containing all the bugs that the ARI can
detect.
+    /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --print-doc >>
ari.doc.bug
+    cat ari.*.bug | $AWK > ari.idx '
+BEGIN {
+    FS=": *"
+}
+{
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    file = $1
+    line = $2
+    category = $3
+    bug = $4
+    if (! (bug in cat)) {
+	cat[bug] = category
+	# strip any trailing .... (supplement)
+	doc[bug] = gensub(/ \([^\)]*\)$/, "", 1, $5)
+	count[bug] = 0
+    }
+    if (file != "") {
+	count[bug] += 1
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	print bug ":" file ":" category
+    }
+    # Also accumulate some categories as obsolete
+    if (category == "deprecated") {
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	if (file != "") {
+	    print category ":" file ":" "obsolete"
+	}
+	#count[category]++
+	#doc[category] = "Contains " category " code"
+    }
+}
+END {
+    i = 0;
+    for (bug in count) {
+	# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+	print bug ":" count[bug] ":" cat[bug] ":" doc[bug] >> "ari.doc"
+    }
+}
+'
+fi
+
+
+# print_toc BIAS MIN_COUNT CATEGORIES TITLE
+
+# Print a table of contents containing the bugs CATEGORIES.  If the
+# BUG count >= MIN_COUNT print it in the table-of-contents.  If
+# MIN_COUNT is non -ve, also include a link to the table.Adjust the
+# printed BUG count by BIAS.
+
+all=
+
+print_toc ()
+{
+    bias="$1" ; shift
+    min_count="$1" ; shift
+
+    all=" $all $1 "
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    shift
+
+    title="$@" ; shift
+
+    echo "<p>" >> ${newari}
+    echo "<a name=${title}>" | tr '[A-Z]' '[a-z]' >> ${newari}
+    echo "<h3>${title}</h3>" >> ${newari}
+    cat >> ${newari} # description
+
+    cat >> ${newari} <<EOF
+<p>
+<table>
+<tr><th align=left>BUG</th><th>Total</th><th
align=left>Description</th></tr>
+EOF
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    cat ${wwwdir}/ari.doc \
+    | sort -t: +1rn -2 +0d \
+    | $AWK >> ${newari} '
+BEGIN {
+    FS=":"
+    '"$categories"'
+    MIN_COUNT = '${min_count}'
+    BIAS = '${bias}'
+    total = 0
+    nr = 0
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (count < MIN_COUNT) next
+    if (!(category in categories)) next
+    nr += 1
+    total += count
+    printf "<tr>"
+    printf "<th align=left valign=top><a name=\"%s\">", bug
+    printf "%s", gensub(/_/, " ", "g", bug)
+    printf "</a></th>"
+    printf "<td align=right valign=top>"
+    if (count > 0 && MIN_COUNT >= 0) {
+	printf "<a href=\"#,%s\">%d</a></td>", bug, count + BIAS
+    } else {
+	printf "%d", count + BIAS
+    }
+    printf "</td>"
+    printf "<td align=left valign=top>%s</td>", doc
+    printf "</tr>"
+    print ""
+}
+END {
+    print "<tr><th align=right valign=top>" nr "</th><th align=right
valign=top>" total "</th><td></td></tr>"
+}
+'
+cat >> ${newari} <<EOF
+</table>
+<p>
+EOF
+}
+
+
+print_table ()
+{
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    # Remember to prune the dir prefix from projects files
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    cat ${wwwdir}/ari.idx | $AWK >> ${newari} '
+function qsort (table,
+		middle, tmp, left, nr_left, right, nr_right, result) {
+    middle = ""
+    for (middle in table) { break; }
+    nr_left = 0;
+    nr_right = 0;
+    for (tmp in table) {
+	if (tolower(tmp) < tolower(middle)) {
+	    nr_left++
+	    left[tmp] = tmp
+	} else if (tolower(tmp) > tolower(middle)) {
+	    nr_right++
+	    right[tmp] = tmp
+	}
+    }
+    #print "qsort " nr_left " " middle " " nr_right > "/dev/stderr"
+    result = ""
+    if (nr_left > 0) {
+	result = qsort(left) SUBSEP
+    }
+    result = result middle
+    if (nr_right > 0) {
+	result = result SUBSEP qsort(right)
+    }
+    return result
+}
+function print_heading (where, bug_i) {
+    print ""
+    print "<tr border=1>"
+    print "<th align=left>File</th>"
+    print "<th align=left><em>Total</em></th>"
+    print "<th></th>"
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th>"
+	# The title names are offset by one.  Otherwize, when the browser
+	# jumps to the name it leaves out half the relevant column.
+	#printf "<a name=\",%s\">&nbsp;</a>", bug
+	printf "<a name=\",%s\">&nbsp;</a>", i2bug[bug_i-1]
+	printf "<a href=\"#%s\">", bug
+	printf "%s", gensub(/_/, " ", "g", bug)
+	printf "</a>\n"
+	printf "</th>\n"
+    }
+    #print "<th></th>"
+    printf "<th><a name=\"%s,\">&nbsp;</a></th>\n", i2bug[bug_i-1]
+    print "<th align=left><em>Total</em></th>"
+    print "<th align=left>File</th>"
+    print "</tr>"
+}
+function print_totals (where, bug_i) {
+    print "<th align=left><em>Totals</em></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&gt;"
+    printf "</th>\n"
+    print "<th></th>";
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th align=right>"
+	printf "<em>"
+	printf "<a href=\"#%s\">%d</a>", bug, bug_total[bug]
+	printf "</em>";
+	printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, where], bug
+	printf "<a href=\"#%s,%s\">v</a>", next_file[bug, where], bug
+	printf "<a name=\"%s,%s\">&nbsp;</a>", where, bug
+	printf "</th>";
+	print ""
+    }
+    print "<th></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&lt;"
+    printf "</th>\n"
+    print "<th align=left><em>Totals</em></th>"
+    print "</tr>"
+}
+BEGIN {
+    FS = ":"
+    '"${categories}"'
+    nr_file = 0;
+    nr_bug = 0;
+}
+{
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    bug = $1
+    file = $2
+    category = $3
+    # Interested in this
+    if (!(category in categories)) next
+    # Totals
+    db[bug, file] += 1
+    bug_total[bug] += 1
+    file_total[file] += 1
+    total += 1
+}
+END {
+
+    # Sort the files and bugs creating indexed lists.
+    nr_bug = split(qsort(bug_total), i2bug, SUBSEP);
+    nr_file = split(qsort(file_total), i2file, SUBSEP);
+
+    # Dummy entries for first/last
+    i2file[0] = 0
+    i2file[-1] = -1
+    i2bug[0] = 0
+    i2bug[-1] = -1
+
+    # Construct a cycle of next/prev links.  The file/bug "0" and "-1"
+    # are used to identify the start/end of the cycle.  Consequently,
+    # prev(0) = -1 (prev of start is the end) and next(-1) = 0 (next
+    # of end is the start).
+
+    # For all the bugs, create a cycle that goes to the prev / next file.
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i]
+	prev = 0
+	prev_file[bug, 0] = -1
+	next_file[bug, -1] = 0
+	for (file_i = 1; file_i <= nr_file; file_i++) {
+	    file = i2file[file_i]
+	    if ((bug, file) in db) {
+		prev_file[bug, file] = prev
+		next_file[bug, prev] = file
+		prev = file
+	    }
+	}
+	prev_file[bug, -1] = prev
+	next_file[bug, prev] = -1
+    }
+
+    # For all the files, create a cycle that goes to the prev / next bug.
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i]
+	prev = 0
+	prev_bug[file, 0] = -1
+	next_bug[file, -1] = 0
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i]
+	    if ((bug, file) in db) {
+		prev_bug[file, bug] = prev
+		next_bug[file, prev] = bug
+		prev = bug
+	    }
+	}
+	prev_bug[file, -1] = prev
+	next_bug[file, prev] = -1
+    }
+
+    print "<table border=1 cellspacing=0>"
+    print "<tr></tr>"
+    print_heading(0);
+    print "<tr></tr>"
+    print_totals(0);
+    print "<tr></tr>"
+
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i];
+	pfile = gensub(/^'${project}'\//, "", 1, file)
+	print ""
+	print "<tr>"
+	print "<th align=left><a name=\"" file ",\">" pfile "</a></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&gt;</a>", file, next_bug[file, 0]
+	printf "</th>\n"
+	print "<th></th>"
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i];
+	    if ((bug, file) in db) {
+		printf "<td align=right>"
+		printf "<a href=\"#%s\">%d</a>", bug, db[bug, file]
+		printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, file], bug
+		printf "<a href=\"#%s,%s\">v</a>", next_file[bug, file], bug
+		printf "<a name=\"%s,%s\">&nbsp;</a>", file, bug
+		printf "</td>"
+		print ""
+	    } else {
+		print "<td>&nbsp;</td>"
+		#print "<td></td>"
+	    }
+	}
+	print "<th></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&lt;</a>", file, prev_bug[file, -1]
+	printf "</th>\n"
+	print "<th align=left>" pfile "</th>"
+	print "</tr>"
+    }
+
+    print "<tr></tr>"
+    print_totals(-1)
+    print "<tr></tr>"
+    print_heading(-1);
+    print "<tr></tr>"
+    print ""
+    print "</table>"
+    print ""
+}
+'
+}
+
+
+# Make the scripts available
+cp ${aridir}/gdb_*.sh ${wwwdir}
+
+# Compute the ARI index - ratio of zero vs non-zero problems.
+indexes=`awk '
+BEGIN {
+    FS=":"
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1; count = $2; category = $3; doc = $4
+
+    if (bug ~ /^legacy_/) legacy++
+    if (bug ~ /^deprecated_/) deprecated++
+
+    if (category !~ /^gdbarch$/) {
+	bugs += count
+    }
+    if (count == 0) {
+	oks++
+    }
+}
+END {
+    #print "tests/ok:", nr / ok
+    #print "bugs/tests:", bugs / nr
+    #print "bugs/ok:", bugs / ok
+    print bugs / ( oks + legacy + deprecated )
+}
+' ${wwwdir}/ari.doc`
+
+# Merge, generating the ARI tables.
+if ${update_web_p}
+then
+    echo "Create the ARI table" 1>&2
+    oldari=${wwwdir}/old.html
+    ari=${wwwdir}/index.html
+    newari=${wwwdir}/new.html
+    rm -f ${newari} ${newari}.gz
+    cat <<EOF >> ${newari}
+<html>
+<head>
+<title>A.R. Index for GDB version ${version}</title>
+</head>
+<body>
+
+<center><h2>A.R. Index for GDB version ${version}<h2></center>
+
+<!-- body, update above using ../index.sh -->
+
+<!-- Navigation.  This page contains the following anchors.
+"BUG": The definition of the bug.
+"FILE,BUG": The row/column containing FILEs BUG count
+"0,BUG", "-1,BUG": The top/bottom total for BUGs column.
+"FILE,O", "FILE,-1": The left/right total for FILEs row.
+",BUG": The top title for BUGs column.
+"FILE,": The left title for FILEs row.
+-->
+
+<center><h3>${indexes}</h3></center>
+<center><h3>You can not take this seriously!</h3></center>
+
+<center>
+Also available:
+<a href="../gdb/ari/">most recent branch</a>
+|
+<a href="../gdb/current/ari/">current</a>
+|
+<a href="../gdb/download/ari/">last release</a>
+</center>
+
+<center>
+Last updated: `date -u`
+</center>
+EOF
+
+    print_toc 0 1 "internal regression" Critical <<EOF
+Things previously eliminated but returned.  This should always be empty.
+EOF
+
+    print_table "regression code comment obsolete gettext"
+
+    print_toc 0 0 code Code <<EOF
+Coding standard problems, portability problems, readability problems.
+EOF
+
+    print_toc 0 0 comment Comments <<EOF
+Problems concerning comments in source files.
+EOF
+
+    print_toc 0 0 gettext GetText <<EOF
+Gettext related problems.
+EOF
+
+    print_toc 0 -1 dos DOS 8.3 File Names <<EOF
+File names with problems on 8.3 file systems.
+EOF
+
+    print_toc -2 -1 deprecated Deprecated <<EOF
+Mechanisms that have been replaced with something better, simpler,
+cleaner; or are no longer required by core-GDB.  New code should not
+use deprecated mechanisms.  Existing code, when touched, should be
+updated to use non-deprecated mechanisms.  See obsolete and deprecate.
+(The declaration and definition are hopefully excluded from count so
+zero should indicate no remaining uses).
+EOF
+
+    print_toc 0 0 obsolete Obsolete <<EOF
+Mechanisms that have been replaced, but have not yet been marked as
+such (using the deprecated_ prefix).  See deprecate and deprecated.
+EOF
+
+    print_toc 0 -1 deprecate Deprecate <<EOF
+Mechanisms that are a candidate for being made obsolete.  Once core
+GDB no longer depends on these mechanisms and/or there is a
+replacement available, these mechanims can be deprecated (adding the
+deprecated prefix) obsoleted (put into category obsolete) or deleted.
+See obsolete and deprecated.
+EOF
+
+    print_toc -2 -1 legacy Legacy <<EOF
+Methods used to prop up targets using targets that still depend on
+deprecated mechanisms. (The method's declaration and definition are
+hopefully excluded from count).
+EOF
+
+    print_toc -2 -1 gdbarch Gdbarch <<EOF
+Count of calls to the gdbarch set methods.  (Declaration and
+definition hopefully excluded from count).
+EOF
+
+    print_toc 0 -1 macro Macro <<EOF
+Breakdown of macro definitions (and #undef) in configuration files.
+EOF
+
+    print_toc 0 0 regression Fixed <<EOF
+Problems that have been expunged from the source code.
+EOF
+
+    # Check for invalid categories
+    for a in $all; do
+	alls="$alls all[$a] = 1 ;"
+    done
+    cat ari.*.doc | $AWK >> ${newari} '
+BEGIN {
+    FS = ":"
+    '"$alls"'
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (!(category in all)) {
+	print "<b>" category "</b>: no documentation<br>"
+    }
+}
+'
+
+    cat >> ${newari} <<EOF
+<center>
+Input files:
+`( cd ${wwwdir} && ls ari.*.bug ari.idx ari.doc ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<center>
+Scripts:
+`( cd ${wwwdir} && ls *.sh ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<!-- /body, update below using ../index.sh -->
+</body>
+</html>
+EOF
+
+    for i in . .. ../..; do
+	x=${wwwdir}/${i}/index.sh
+	if test -x $x; then
+	    $x ${newari}
+	    break
+	fi
+    done
+
+    gzip -c -v -9 ${newari} > ${newari}.gz
+
+    cp ${ari} ${oldari}
+    cp ${ari}.gz ${oldari}.gz
+    cp ${newari} ${ari}
+    cp ${newari}.gz ${ari}.gz
+
+fi # update_web_p
+
+# ls -l ${wwwdir}
+
+exit 0


^ permalink raw reply	[flat|nested] 32+ messages in thread

* PING [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-18 22:41 [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory Pierre Muller
@ 2012-05-25  8:09 ` Pierre Muller
  2012-05-25 19:47 ` Jan Kratochvil
  2012-05-26  0:12 ` [RFA] " Sergio Durigan Junior
  2 siblings, 0 replies; 32+ messages in thread
From: Pierre Muller @ 2012-05-25  8:09 UTC (permalink / raw)
  To: gdb-patches

  Nobody reacted to that RFA...
Should I change something more before including
ARI scripts into gdb/contrib. 

  I would really like to start working on it,
but feel like I am stalled...


  Pierre


> -----Message d'origine-----
> De : gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] De la part de Pierre Muller
> Envoyé : samedi 19 mai 2012 00:40
> À : gdb-patches@sourceware.org
> Objet : [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari
> directory
> 
>   Here is a RFA for inclusion of scripts to gdb/contrib/ari.
> 
>   The only changes to RFC-v2 are:
> 1) directory moved from gdb/ari to gdb/contrib./ari
> 2) create-web-ari-in-src.sh adapted to new directory
> 3) This script now output that location of the generated
> web page (with a different message depending on
> the existence of this file).
> 
> 
> 
> Pierre Muller
> GDB pascal language maintainer
> 
> 
> 2012-05-19  Pierre Muller  <muller@ics.u-strasbg.fr>
> 
> 	* contrib/ari/create-web-ari-in-src.sh: New file.
> 	* contrib/ari/gdb_ari.sh: New file.
> 	* contrib/ari/gdb_find.sh: New file.
> 	* contrib/ari/update-web-ari.sh: New file.
> 
> Index: contrib/ari/create-web-ari-in-src.sh
> ===================================================================
> RCS file: contrib/ari/create-web-ari-in-src.sh
> diff -N contrib/ari/create-web-ari-in-src.sh
> --- /dev/null	1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/create-web-ari-in-src.sh	18 May 2012 22:31:42 -0000
> @@ -0,0 +1,68 @@
> +#! /bin/sh
> +
> +# GDB script to create web ARI page directly from within gdb/ari
directory.
> +#
> +# Copyright (C) 2012 Free Software Foundation, Inc.
> +#
> +# This file is part of GDB.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +set -x
> +
> +# Determine directory of current script.
> +scriptpath=`dirname $0`
> +# If "scriptpath" is a relative path, then convert it to absolute.
> +if [ "`echo ${scriptpath} | cut -b1`" != '/' ] ; then
> +    scriptpath="`pwd`/${scriptpath}"
> +fi
> +
> +# update-web-ari.sh script wants four parameters
> +# 1: directory of checkout src or gdb-RELEASE for release sources.
> +# 2: a temp directory.
> +# 3: a directory for generated web page.
> +# 4: The name of the current package, must be gdb here.
> +# Here we provide default values for these 4 parameters
> +
> +# srcdir parameter
> +if [ -z "${srcdir}" ] ; then
> +  srcdir=${scriptpath}/../../..
> +fi
> +
> +# Determine location of a temporary directory to be used by
> +# update-web-ari.sh script.
> +if [ -z "${tempdir}" ] ; then
> +  if [ ! -z "$TMP" ] ; then
> +    tempdir=$TMP/create-ari
> +  elif [ ! -z "$TEMP" ] ; then
> +    tempdir=$TEMP/create-ari
> +  else
> +    tempdir=/tmp/create-ari
> +  fi
> +fi
> +
> +# Default location of generate index.hmtl web page.
> +if [ -z "${webdir}" ] ; then
> +  webdir=~/htdocs/www/local/ari
> +fi
> +
> +# Launch update-web-ari.sh in same directory as current script.
> +${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
> +
> +if [ -f "${webdir}/index.html" ] ; then
> +  echo "ARI output can be viewed in file \"${webdir}/index.html\""
> +else
> +  echo "ARI script failed to generate file \"${webdir}/index.html\""
> +fi
> +
> Index: contrib/ari/gdb_ari.sh
> ===================================================================
> RCS file: contrib/ari/gdb_ari.sh
> diff -N contrib/ari/gdb_ari.sh
> --- /dev/null	1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/gdb_ari.sh	18 May 2012 22:31:42 -0000
> @@ -0,0 +1,1347 @@
> +#!/bin/sh
> +
> +# GDB script to list of problems using awk.
> +#
> +# Copyright (C) 2002-2012 Free Software Foundation, Inc.
> +#
> +# This file is part of GDB.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +# Make certain that the script is not running in an internationalized
> +# environment.
> +
> +LANG=c ; export LANG
> +LC_ALL=c ; export LC_ALL
> +
> +# Permanent checks take the form:
> +
> +#     Do not use XXXX, ISO C 90 implies YYYY
> +#     Do not use XXXX, instead use YYYY''.
> +
> +# and should never be removed.
> +
> +# Temporary checks take the form:
> +
> +#     Replace XXXX with YYYY
> +
> +# and once they reach zero, can be eliminated.
> +
> +# FIXME: It should be able to override this on the command line
> +error="regression"
> +warning="regression"
> +ari="regression eol code comment deprecated legacy obsolete gettext"
> +all="regression eol code comment deprecated legacy obsolete gettext
> deprecate internal gdbarch macro"
> +print_doc=0
> +print_idx=0
> +
> +usage ()
> +{
> +    cat <<EOF 1>&2
> +Error: $1
> +
> +Usage:
> +    $0 --print-doc --print-idx -Wall -Werror -W<category> <file> ...
> +Options:
> +  --print-doc    Print a list of all potential problems, then exit.
> +  --print-idx    Include the problems IDX (index or key) in every
message.
> +  --src=file     Write source lines to file.
> +  -Werror        Treat all problems as errors.
> +  -Wall          Report all problems.
> +  -Wari          Report problems that should be fixed in new code.
> +  -W<category>   Report problems in the specifed category.  Vaid
categories
> +                 are: ${all}
> +EOF
> +    exit 1
> +}
> +
> +
> +# Parse the various options
> +Woptions=
> +srclines=""
> +while test $# -gt 0
> +do
> +    case "$1" in
> +    -Wall ) Woptions="${all}" ;;
> +    -Wari ) Woptions="${ari}" ;;
> +    -Werror ) Werror=1 ;;
> +    -W* ) Woptions="${Woptions} `echo x$1 | sed -e 's/x-W//'`" ;;
> +    --print-doc ) print_doc=1 ;;
> +    --print-idx ) print_idx=1 ;;
> +    --src=* ) srclines="`echo $1 | sed -e 's/--src=/srclines=\"/'`\"" ;;
> +    -- ) shift ; break ;;
> +    - ) break ;;
> +    -* ) usage "$1: unknown option" ;;
> +    * ) break ;;
> +    esac
> +    shift
> +done
> +if test -n "$Woptions" ; then
> +    warning="$Woptions"
> +    error=
> +fi
> +
> +
> +# -Werror implies treating all warnings as errors.
> +if test -n "${Werror}" ; then
> +    error="${error} ${warning}"
> +fi
> +
> +
> +# Validate all errors and warnings.
> +for w in ${warning} ${error}
> +do
> +    case " ${all} " in
> +    *" ${w} "* ) ;;
> +    * ) usage "Unknown option -W${w}" ;;
> +    esac
> +done
> +
> +
> +# make certain that there is at least one file.
> +if test $# -eq 0 -a ${print_doc} = 0
> +then
> +    usage "Missing file."
> +fi
> +
> +
> +# Convert the errors/warnings into corresponding array entries.
> +for a in ${all}
> +do
> +    aris="${aris} ari_${a} = \"${a}\";"
> +done
> +for w in ${warning}
> +do
> +    warnings="${warnings} warning[ari_${w}] = 1;"
> +done
> +for e in ${error}
> +do
> +    errors="${errors} error[ari_${e}]  = 1;"
> +done
> +
> +awk -- '
> +BEGIN {
> +    # NOTE, for a per-file begin use "FNR == 1".
> +    '"${aris}"'
> +    '"${errors}"'
> +    '"${warnings}"'
> +    '"${srclines}"'
> +    print_doc =  '$print_doc'
> +    print_idx =  '$print_idx'
> +    PWD = "'`pwd`'"
> +}
> +
> +# Print the error message for BUG.  Append SUPLEMENT if non-empty.
> +function print_bug(file,line,prefix,category,bug,doc,supplement,
> suffix,idx) {
> +    if (print_idx) {
> +	idx = bug ": "
> +    } else {
> +	idx = ""
> +    }
> +    if (supplement) {
> +	suffix = " (" supplement ")"
> +    } else {
> +	suffix = ""
> +    }
> +    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +    print file ":" line ": " prefix category ": " idx doc suffix
> +    if (srclines != "") {
> +	print file ":" line ":" $0 >> srclines
> +    }
> +}
> +
> +function fix(bug,file,count) {
> +    skip[bug, file] = count
> +    skipped[bug, file] = 0
> +}
> +
> +function fail(bug,supplement) {
> +    if (doc[bug] == "") {
> +	print_bug("", 0, "internal: ", "internal", "internal", "Missing doc
> for bug " bug)
> +	exit
> +    }
> +    if (category[bug] == "") {
> +	print_bug("", 0, "internal: ", "internal", "internal", "Missing
> category for bug " bug)
> +	exit
> +    }
> +
> +    if (ARI_OK == bug) {
> +	return
> +    }
> +    # Trim the filename down to just DIRECTORY/FILE so that it can be
> +    # robustly used by the FIX code.
> +
> +    if (FILENAME ~ /^\//) {
> +	canonicalname = FILENAME
> +    } else {
> +        canonicalname = PWD "/" FILENAME
> +    }
> +    shortname = gensub (/^.*\/([^\\]*\/[^\\]*)$/, "\\1", 1,
canonicalname)
> +
> +    skipped[bug, shortname]++
> +    if (skip[bug, shortname] >= skipped[bug, shortname]) {
> +	# print FILENAME, FNR, skip[bug, FILENAME], skipped[bug, FILENAME],
> bug
> +	# Do nothing
> +    } else if (error[category[bug]]) {
> +	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +	print_bug(FILENAME, FNR, "", category[bug], bug, doc[bug],
> supplement)
> +    } else if (warning[category[bug]]) {
> +	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +	print_bug(FILENAME, FNR, "warning: ", category[bug], bug, doc[bug],
> supplement)
> +    }
> +}
> +
> +FNR == 1 {
> +    seen[FILENAME] = 1
> +    if (match(FILENAME, "\\.[ly]$")) {
> +      # FILENAME is a lex or yacc source
> +      is_yacc_or_lex = 1
> +    }
> +    else {
> +      is_yacc_or_lex = 0
> +    }
> +}
> +END {
> +    if (print_idx) {
> +	idx = bug ": "
> +    } else {
> +	idx = ""
> +    }
> +    # Did we do only a partial skip?
> +    for (bug_n_file in skip) {
> +	split (bug_n_file, a, SUBSEP)
> +	bug = a[1]
> +	file = a[2]
> +	if (seen[file] && (skipped[bug_n_file] < skip[bug_n_file])) {
> +	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +	    b = file " missing " bug
> +	    print_bug(file, 0, "", "internal", file " missing " bug,
> "Expecting " skip[bug_n_file] " occurances of bug " bug " in file " file
",
> only found " skipped[bug_n_file])
> +	}
> +    }
> +}
> +
> +
> +# Skip OBSOLETE lines
> +/(^|[^_[:alnum:]])OBSOLETE([^_[:alnum:]]|$)/ { next; }
> +
> +# Skip ARI lines
> +
> +BEGIN {
> +    ARI_OK = ""
> +}
> +
> +/\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
> +    ARI_OK = gensub(/^.*\/\*
> ARI:[[:space:]]*(.*[^[:space:]])[[:space:]]*\*\/.*$/, "\\1", 1, $0)
> +    # print "ARI line found \"" $0 "\""
> +    # print "ARI_OK \"" ARI_OK "\""
> +}
> +! /\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
> +    ARI_OK = ""
> +}
> +
> +
> +# Things in comments
> +
> +BEGIN { doc["GNU/Linux"] = "\
> +Do not use `Linux'\'', instead use `Linux kernel'\'' or `GNU/Linux
> system'\'';\
> + comments should clearly differentiate between the two (this test assumes
> that\
> + word `Linux'\'' appears on the same line as the word `GNU'\'' or
> `kernel'\''\
> + or a kernel version"
> +    category["GNU/Linux"] = ari_comment
> +}
> +/(^|[^_[:alnum:]])Linux([^_[:alnum:]]|$)/ \
> +&& !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
> +&& !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
> +&& !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
> +&& !/(^|[^_[:alnum:]])Linux [:digit:]\.[:digit:]+)/ {
> +    fail("GNU/Linux")
> +}
> +
> +BEGIN { doc["ARGSUSED"] = "\
> +Do not use ARGSUSED, unnecessary"
> +    category["ARGSUSED"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ARGSUSED([^_[:alnum:]]|$)/ {
> +    fail("ARGSUSED")
> +}
> +
> +
> +# SNIP - Strip out comments - SNIP
> +
> +FNR == 1 {
> +    comment_p = 0
> +}
> +comment_p && /\*\// { gsub (/^([^\*]|\*+[^\/\*])*\*+\//, " "); comment_p
=
> 0; }
> +comment_p { next; }
> +!comment_p { gsub (/\/\*([^\*]|\*+[^\/\*])*\*+\//, " "); }
> +!comment_p && /(^|[^"])\/\*/ { gsub (/\/\*.*$/, " "); comment_p = 1; }
> +
> +
> +BEGIN { doc["_ markup"] = "\
> +All messages should be marked up with _."
> +    category["_ markup"] = ari_gettext
> +}
>
+/^[^"]*[[:space:]](warning|error|error_no_arg|query|perror_with_name)[[:spa
> ce:]]*\([^_\(a-z]/ {
> +    if (! /\("%s"/) {
> +	fail("_ markup")
> +    }
> +}
> +
> +BEGIN { doc["trailing new line"] = "\
> +A message should not have a trailing new line"
> +    category["trailing new line"] = ari_gettext
> +}
> +/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(_\(".*\\n"\)[\),]/ {
> +    fail("trailing new line")
> +}
> +
> +# Include files for which GDB has a custom version.
> +
> +BEGIN { doc["assert.h"] = "\
> +Do not include assert.h, instead include \"gdb_assert.h\"";
> +    category["assert.h"] = ari_regression
> +    fix("assert.h", "gdb/gdb_assert.h", 0) # it does not use it
> +}
> +/^#[[:space:]]*include[[:space:]]+.assert\.h./ {
> +    fail("assert.h")
> +}
> +
> +BEGIN { doc["dirent.h"] = "\
> +Do not include dirent.h, instead include gdb_dirent.h"
> +    category["dirent.h"] = ari_regression
> +}
> +/^#[[:space:]]*include[[:space:]]*.dirent\.h./ {
> +    fail("dirent.h")
> +}
> +
> +BEGIN { doc["regex.h"] = "\
> +Do not include regex.h, instead include gdb_regex.h"
> +    category["regex.h"] = ari_regression
> +    fix("regex.h", "gdb/gdb_regex.h", 1)
> +}
> +/^#[[:space:]]*include[[:space:]]*.regex\.h./ {
> +    fail("regex.h")
> +}
> +
> +BEGIN { doc["xregex.h"] = "\
> +Do not include xregex.h, instead include gdb_regex.h"
> +    category["xregex.h"] = ari_regression
> +    fix("xregex.h", "gdb/gdb_regex.h", 1)
> +}
> +/^#[[:space:]]*include[[:space:]]*.xregex\.h./ {
> +    fail("xregex.h")
> +}
> +
> +BEGIN { doc["gnu-regex.h"] = "\
> +Do not include gnu-regex.h, instead include gdb_regex.h"
> +    category["gnu-regex.h"] = ari_regression
> +}
> +/^#[[:space:]]*include[[:space:]]*.gnu-regex\.h./ {
> +    fail("gnu regex.h")
> +}
> +
> +BEGIN { doc["stat.h"] = "\
> +Do not include stat.h or sys/stat.h, instead include gdb_stat.h"
> +    category["stat.h"] = ari_regression
> +    fix("stat.h", "gdb/gdb_stat.h", 1)
> +}
> +/^#[[:space:]]*include[[:space:]]*.stat\.h./ \
> +|| /^#[[:space:]]*include[[:space:]]*.sys\/stat\.h./ {
> +    fail("stat.h")
> +}
> +
> +BEGIN { doc["wait.h"] = "\
> +Do not include wait.h or sys/wait.h, instead include gdb_wait.h"
> +    fix("wait.h", "gdb/gdb_wait.h", 2);
> +    category["wait.h"] = ari_regression
> +}
> +/^#[[:space:]]*include[[:space:]]*.wait\.h./ \
> +|| /^#[[:space:]]*include[[:space:]]*.sys\/wait\.h./ {
> +    fail("wait.h")
> +}
> +
> +BEGIN { doc["vfork.h"] = "\
> +Do not include vfork.h, instead include gdb_vfork.h"
> +    fix("vfork.h", "gdb/gdb_vfork.h", 1);
> +    category["vfork.h"] = ari_regression
> +}
> +/^#[[:space:]]*include[[:space:]]*.vfork\.h./ {
> +    fail("vfork.h")
> +}
> +
> +BEGIN { doc["error not internal-warning"] = "\
> +Do not use error(\"internal-warning\"), instead use internal_warning"
> +    category["error not internal-warning"] = ari_regression
> +}
> +/error.*\"[Ii]nternal.warning/ {
> +    fail("error not internal-warning")
> +}
> +
> +BEGIN { doc["%p"] = "\
> +Do not use printf(\"%p\"), instead use printf(\"%s\",paddr()) to dump a \
> +target address, or host_address_to_string() for a host address"
> +    category["%p"] = ari_code
> +}
> +/%p/ && !/%prec/ {
> +    fail("%p")
> +}
> +
> +BEGIN { doc["%ll"] = "\
> +Do not use printf(\"%ll\"), instead use printf(\"%s\",phex()) to dump a \
> +`long long'\'' value"
> +    category["%ll"] = ari_code
> +}
> +# Allow %ll in scanf
> +/%[0-9]*ll/ && !/scanf \(.*%[0-9]*ll/ {
> +    fail("%ll")
> +}
> +
> +
> +# SNIP - Strip out strings - SNIP
> +
> +# Test on top.c, scm-valprint.c, remote-rdi.c, ada-lang.c
> +FNR == 1 {
> +    string_p = 0
> +    trace_string = 0
> +}
> +# Strip escaped characters.
> +{ gsub(/\\./, "."); }
> +# Strip quoted quotes.
> +{ gsub(/'\''.'\''/, "'\''.'\''"); }
> +# End of multi-line string
> +string_p && /\"/ {
> +    if (trace_string) print "EOS:" FNR, $0;
> +    gsub (/^[^\"]*\"/, "'\''");
> +    string_p = 0;
> +}
> +# Middle of multi-line string, discard line.
> +string_p {
> +    if (trace_string) print "MOS:" FNR, $0;
> +    $0 = ""
> +}
> +# Strip complete strings from the middle of the line
> +!string_p && /\"[^\"]*\"/ {
> +    if (trace_string) print "COS:" FNR, $0;
> +    gsub (/\"[^\"]*\"/, "'\''");
> +}
> +# Start of multi-line string
> +BEGIN { doc["multi-line string"] = "\
> +Multi-line string must have the newline escaped"
> +    category["multi-line string"] = ari_regression
> +}
> +!string_p && /\"/ {
> +    if (trace_string) print "SOS:" FNR, $0;
> +    if (/[^\\]$/) {
> +	fail("multi-line string")
> +    }
> +    gsub (/\"[^\"]*$/, "'\''");
> +    string_p = 1;
> +}
> +# { print }
> +
> +# Multi-line string
> +string_p &&
> +
> +# Accumulate continuation lines
> +FNR == 1 {
> +    cont_p = 0
> +}
> +!cont_p { full_line = ""; }
> +/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1;
next;
> }
> +cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
> +
> +
> +# GDB uses ISO C 90.  Check for any non pure ISO C 90 code
> +
> +BEGIN { doc["PARAMS"] = "\
> +Do not use PARAMS(), ISO C 90 implies prototypes"
> +    category["PARAMS"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])PARAMS([^_[:alnum:]]|$)/ {
> +    fail("PARAMS")
> +}
> +
> +BEGIN { doc["__func__"] = "\
> +Do not use __func__, ISO C 90 does not support this macro"
> +    category["__func__"] = ari_regression
> +    fix("__func__", "gdb/gdb_assert.h", 1)
> +}
> +/(^|[^_[:alnum:]])__func__([^_[:alnum:]]|$)/ {
> +    fail("__func__")
> +}
> +
> +BEGIN { doc["__FUNCTION__"] = "\
> +Do not use __FUNCTION__, ISO C 90 does not support this macro"
> +    category["__FUNCTION__"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])__FUNCTION__([^_[:alnum:]]|$)/ {
> +    fail("__FUNCTION__")
> +}
> +
> +BEGIN { doc["__CYGWIN32__"] = "\
> +Do not use __CYGWIN32__, instead use __CYGWIN__ or, better, an explicit \
> +autoconf tests"
> +    category["__CYGWIN32__"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])__CYGWIN32__([^_[:alnum:]]|$)/ {
> +    fail("__CYGWIN32__")
> +}
> +
> +BEGIN { doc["PTR"] = "\
> +Do not use PTR, ISO C 90 implies `void *'\''"
> +    category["PTR"] = ari_regression
> +    #fix("PTR", "gdb/utils.c", 6)
> +}
> +/(^|[^_[:alnum:]])PTR([^_[:alnum:]]|$)/ {
> +    fail("PTR")
> +}
> +
> +BEGIN { doc["UCASE function"] = "\
> +Function name is uppercase."
> +    category["UCASE function"] = ari_code
> +    possible_UCASE = 0
> +    UCASE_full_line = ""
> +}
> +(possible_UCASE) {
> +    if (ARI_OK == "UCASE function") {
> +	possible_UCASE = 0
> +    }
> +    # Closing brace found?
> +    else if (UCASE_full_line ~ \
> +	/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\).*$/) {
> +	if ((UCASE_full_line ~ \
> +	    /^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
> +	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
> +	    store_FNR = FNR
> +	    FNR = possible_FNR
> +	    store_0 = $0;
> +	    $0 = UCASE_full_line;
> +	    fail("UCASE function")
> +	    FNR = store_FNR
> +	    $0 = store_0;
> +	}
> +	possible_UCASE = 0
> +	UCASE_full_line = ""
> +    } else {
> +	UCASE_full_line = UCASE_full_line $0;
> +    }
> +}
> +/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
> +    possible_UCASE = 1
> +    if (ARI_OK == "UCASE function") {
> +	possible_UCASE = 0
> +    }
> +    possible_FNR = FNR
> +    UCASE_full_line = $0
> +}
> +
> +
> +BEGIN { doc["editCase function"] = "\
> +Function name starts lower case but has uppercased letters."
> +    category["editCase function"] = ari_code
> +    possible_editCase = 0
> +    editCase_full_line = ""
> +}
> +(possible_editCase) {
> +    if (ARI_OK == "ediCase function") {
> +	possible_editCase = 0
> +    }
> +    # Closing brace found?
> +    else if (editCase_full_line ~ \
> +/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\).*$/) {
> +	if ((editCase_full_line ~ \
> +/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\)[[:space:]]*$/)
\
> +	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
> +	    store_FNR = FNR
> +	    FNR = possible_FNR
> +	    store_0 = $0;
> +	    $0 = editCase_full_line;
> +	    fail("editCase function")
> +	    FNR = store_FNR
> +	    $0 = store_0;
> +	}
> +	possible_editCase = 0
> +	editCase_full_line = ""
> +    } else {
> +	editCase_full_line = editCase_full_line $0;
> +    }
> +}
>
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/
> {
> +    possible_editCase = 1
> +    if (ARI_OK == "editCase function") {
> +        possible_editCase = 0
> +    }
> +    possible_FNR = FNR
> +    editCase_full_line = $0
> +}
> +
> +# Only function implementation should be on first column
> +BEGIN { doc["function call in first column"] = "\
> +Function name in first column should be restricted to function
> implementation"
> +    category["function call in first column"] = ari_code
> +}
> +/^[a-z][a-z0-9_]*[[:space:]]*\((|[^*][^()]*)\)[[:space:]]*[^ \t]+/ {
> +    fail("function call in first column")
> +}
> +
> +
> +# Functions without any parameter should have (void)
> +# after their name not simply ().
> +BEGIN { doc["no parameter function"] = "\
> +Function having no parameter should be declared with funcname (void)."
> +    category["no parameter function"] = ari_code
> +}
> +/^[a-zA-Z][a-z0-9A-Z_]*[[:space:]]*\(\)/ {
> +    fail("no parameter function")
> +}
> +
> +BEGIN { doc["hash"] = "\
> +Do not use ` #...'\'', instead use `#...'\''(some compilers only
correctly
> \
> +parse a C preprocessor directive when `#'\'' is the first character on \
> +the line)"
> +    category["hash"] = ari_regression
> +}
> +/^[[:space:]]+#/ {
> +    fail("hash")
> +}
> +
> +BEGIN { doc["OP eol"] = "\
> +Do not use &&, or || at the end of a line"
> +    category["OP eol"] = ari_code
> +}
> +/(\|\||\&\&|==|!=)[[:space:]]*$/ {
> +    fail("OP eol")
> +}
> +
> +BEGIN { doc["strerror"] = "\
> +Do not use strerror(), instead use safe_strerror()"
> +    category["strerror"] = ari_regression
> +    fix("strerror", "gdb/gdb_string.h", 1)
> +    fix("strerror", "gdb/mingw-hdep.c", 1)
> +    fix("strerror", "gdb/posix-hdep.c", 1)
> +}
> +/(^|[^_[:alnum:]])strerror[[:space:]]*\(/ {
> +    fail("strerror")
> +}
> +
> +BEGIN { doc["long long"] = "\
> +Do not use `long long'\'', instead use LONGEST"
> +    category["long long"] = ari_code
> +    # defs.h needs two such patterns for LONGEST and ULONGEST definitions
> +    fix("long long", "gdb/defs.h", 2)
> +}
> +/(^|[^_[:alnum:]])long[[:space:]]+long([^_[:alnum:]]|$)/ {
> +    fail("long long")
> +}
> +
> +BEGIN { doc["ATTRIBUTE_UNUSED"] = "\
> +Do not use ATTRIBUTE_UNUSED, do not bother (GDB is compiled with -Werror
> and, \
> +consequently, is not able to tolerate false warnings.  Since
-Wunused-param
> \
> +produces such warnings, neither that warning flag nor ATTRIBUTE_UNUSED \
> +are used by GDB"
> +    category["ATTRIBUTE_UNUSED"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ATTRIBUTE_UNUSED([^_[:alnum:]]|$)/ {
> +    fail("ATTRIBUTE_UNUSED")
> +}
> +
> +BEGIN { doc["ATTR_FORMAT"] = "\
> +Do not use ATTR_FORMAT, use ATTRIBUTE_PRINTF instead"
> +    category["ATTR_FORMAT"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ATTR_FORMAT([^_[:alnum:]]|$)/ {
> +    fail("ATTR_FORMAT")
> +}
> +
> +BEGIN { doc["ATTR_NORETURN"] = "\
> +Do not use ATTR_NORETURN, use ATTRIBUTE_NORETURN instead"
> +    category["ATTR_NORETURN"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ATTR_NORETURN([^_[:alnum:]]|$)/ {
> +    fail("ATTR_NORETURN")
> +}
> +
> +BEGIN { doc["NORETURN"] = "\
> +Do not use NORETURN, use ATTRIBUTE_NORETURN instead"
> +    category["NORETURN"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])NORETURN([^_[:alnum:]]|$)/ {
> +    fail("NORETURN")
> +}
> +
> +
> +# General problems
> +
> +BEGIN { doc["multiple messages"] = "\
> +Do not use multiple calls to warning or error, instead use a single call"
> +    category["multiple messages"] = ari_gettext
> +}
> +FNR == 1 {
> +    warning_fnr = -1
> +}
> +/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(/ {
> +    if (FNR == warning_fnr + 1) {
> +	fail("multiple messages")
> +    } else {
> +	warning_fnr = FNR
> +    }
> +}
> +
> +# Commented out, but left inside sources, just in case.
> +# BEGIN { doc["inline"] = "\
> +# Do not use the inline attribute; \
> +# since the compiler generally ignores this, better algorithm selection \
> +# is needed to improved performance"
> +#    category["inline"] = ari_code
> +# }
> +# /(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
> +#     fail("inline")
> +# }
> +
> +# This test is obsolete as this type
> +# has been deprecated and finally suppressed from GDB sources
> +#BEGIN { doc["obj_private"] = "\
> +#Replace obj_private with objfile_data"
> +#    category["obj_private"] = ari_obsolete
> +#}
> +#/(^|[^_[:alnum:]])obj_private([^_[:alnum:]]|$)/ {
> +#    fail("obj_private")
> +#}
> +
> +BEGIN { doc["abort"] = "\
> +Do not use abort, instead use internal_error; GDB should never abort"
> +    category["abort"] = ari_regression
> +    fix("abort", "gdb/utils.c", 3)
> +}
> +/(^|[^_[:alnum:]])abort[[:space:]]*\(/ {
> +    fail("abort")
> +}
> +
> +BEGIN { doc["basename"] = "\
> +Do not use basename, instead use lbasename"
> +    category["basename"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])basename[[:space:]]*\(/ {
> +    fail("basename")
> +}
> +
> +BEGIN { doc["assert"] = "\
> +Do not use assert, instead use gdb_assert or internal_error; assert \
> +calls abort and GDB should never call abort"
> +    category["assert"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])assert[[:space:]]*\(/ {
> +    fail("assert")
> +}
> +
> +BEGIN { doc["TARGET_HAS_HARDWARE_WATCHPOINTS"] = "\
> +Replace TARGET_HAS_HARDWARE_WATCHPOINTS with nothing, not needed"
> +    category["TARGET_HAS_HARDWARE_WATCHPOINTS"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])TARGET_HAS_HARDWARE_WATCHPOINTS([^_[:alnum:]]|$)/ {
> +    fail("TARGET_HAS_HARDWARE_WATCHPOINTS")
> +}
> +
> +BEGIN { doc["ADD_SHARED_SYMBOL_FILES"] = "\
> +Replace ADD_SHARED_SYMBOL_FILES with nothing, not needed?"
> +    category["ADD_SHARED_SYMBOL_FILES"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ADD_SHARED_SYMBOL_FILES([^_[:alnum:]]|$)/ {
> +    fail("ADD_SHARED_SYMBOL_FILES")
> +}
> +
> +BEGIN { doc["SOLIB_ADD"] = "\
> +Replace SOLIB_ADD with nothing, not needed?"
> +    category["SOLIB_ADD"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])SOLIB_ADD([^_[:alnum:]]|$)/ {
> +    fail("SOLIB_ADD")
> +}
> +
> +BEGIN { doc["SOLIB_CREATE_INFERIOR_HOOK"] = "\
> +Replace SOLIB_CREATE_INFERIOR_HOOK with nothing, not needed?"
> +    category["SOLIB_CREATE_INFERIOR_HOOK"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])SOLIB_CREATE_INFERIOR_HOOK([^_[:alnum:]]|$)/ {
> +    fail("SOLIB_CREATE_INFERIOR_HOOK")
> +}
> +
> +BEGIN { doc["SOLIB_LOADED_LIBRARY_PATHNAME"] = "\
> +Replace SOLIB_LOADED_LIBRARY_PATHNAME with nothing, not needed?"
> +    category["SOLIB_LOADED_LIBRARY_PATHNAME"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])SOLIB_LOADED_LIBRARY_PATHNAME([^_[:alnum:]]|$)/ {
> +    fail("SOLIB_LOADED_LIBRARY_PATHNAME")
> +}
> +
> +BEGIN { doc["REGISTER_U_ADDR"] = "\
> +Replace REGISTER_U_ADDR with nothing, not needed?"
> +    category["REGISTER_U_ADDR"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])REGISTER_U_ADDR([^_[:alnum:]]|$)/ {
> +    fail("REGISTER_U_ADDR")
> +}
> +
> +BEGIN { doc["PROCESS_LINENUMBER_HOOK"] = "\
> +Replace PROCESS_LINENUMBER_HOOK with nothing, not needed?"
> +    category["PROCESS_LINENUMBER_HOOK"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])PROCESS_LINENUMBER_HOOK([^_[:alnum:]]|$)/ {
> +    fail("PROCESS_LINENUMBER_HOOK")
> +}
> +
> +BEGIN { doc["PC_SOLIB"] = "\
> +Replace PC_SOLIB with nothing, not needed?"
> +    category["PC_SOLIB"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])PC_SOLIB([^_[:alnum:]]|$)/ {
> +    fail("PC_SOLIB")
> +}
> +
> +BEGIN { doc["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = "\
> +Replace IN_SOLIB_DYNSYM_RESOLVE_CODE with nothing, not needed?"
> +    category["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])IN_SOLIB_DYNSYM_RESOLVE_CODE([^_[:alnum:]]|$)/ {
> +    fail("IN_SOLIB_DYNSYM_RESOLVE_CODE")
> +}
> +
> +BEGIN { doc["GCC_COMPILED_FLAG_SYMBOL"] = "\
> +Replace GCC_COMPILED_FLAG_SYMBOL with nothing, not needed?"
> +    category["GCC_COMPILED_FLAG_SYMBOL"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])GCC_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
> +    fail("GCC_COMPILED_FLAG_SYMBOL")
> +}
> +
> +BEGIN { doc["GCC2_COMPILED_FLAG_SYMBOL"] = "\
> +Replace GCC2_COMPILED_FLAG_SYMBOL with nothing, not needed?"
> +    category["GCC2_COMPILED_FLAG_SYMBOL"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])GCC2_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
> +    fail("GCC2_COMPILED_FLAG_SYMBOL")
> +}
> +
> +BEGIN { doc["FUNCTION_EPILOGUE_SIZE"] = "\
> +Replace FUNCTION_EPILOGUE_SIZE with nothing, not needed?"
> +    category["FUNCTION_EPILOGUE_SIZE"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])FUNCTION_EPILOGUE_SIZE([^_[:alnum:]]|$)/ {
> +    fail("FUNCTION_EPILOGUE_SIZE")
> +}
> +
> +BEGIN { doc["HAVE_VFORK"] = "\
> +Do not use HAVE_VFORK, instead include \"gdb_vfork.h\" and call vfork() \
> +unconditionally"
> +    category["HAVE_VFORK"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])HAVE_VFORK([^_[:alnum:]]|$)/ {
> +    fail("HAVE_VFORK")
> +}
> +
> +BEGIN { doc["bcmp"] = "\
> +Do not use bcmp(), ISO C 90 implies memcmp()"
> +    category["bcmp"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])bcmp[[:space:]]*\(/ {
> +    fail("bcmp")
> +}
> +
> +BEGIN { doc["setlinebuf"] = "\
> +Do not use setlinebuf(), ISO C 90 implies setvbuf()"
> +    category["setlinebuf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])setlinebuf[[:space:]]*\(/ {
> +    fail("setlinebuf")
> +}
> +
> +BEGIN { doc["bcopy"] = "\
> +Do not use bcopy(), ISO C 90 implies memcpy() and memmove()"
> +    category["bcopy"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])bcopy[[:space:]]*\(/ {
> +    fail("bcopy")
> +}
> +
> +BEGIN { doc["get_frame_base"] = "\
> +Replace get_frame_base with get_frame_id, get_frame_base_address, \
> +get_frame_locals_address, or get_frame_args_address."
> +    category["get_frame_base"] = ari_obsolete
> +}
> +/(^|[^_[:alnum:]])get_frame_base([^_[:alnum:]]|$)/ {
> +    fail("get_frame_base")
> +}
> +
> +BEGIN { doc["floatformat_to_double"] = "\
> +Do not use floatformat_to_double() from libierty, \
> +instead use floatformat_to_doublest()"
> +    fix("floatformat_to_double", "gdb/doublest.c", 1)
> +    category["floatformat_to_double"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])floatformat_to_double[[:space:]]*\(/ {
> +    fail("floatformat_to_double")
> +}
> +
> +BEGIN { doc["floatformat_from_double"] = "\
> +Do not use floatformat_from_double() from libierty, \
> +instead use floatformat_from_doublest()"
> +    category["floatformat_from_double"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])floatformat_from_double[[:space:]]*\(/ {
> +    fail("floatformat_from_double")
> +}
> +
> +BEGIN { doc["BIG_ENDIAN"] = "\
> +Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
> +    category["BIG_ENDIAN"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
> +    fail("BIG_ENDIAN")
> +}
> +
> +BEGIN { doc["LITTLE_ENDIAN"] = "\
> +Do not use LITTLE_ENDIAN, instead use BFD_ENDIAN_LITTLE";
> +    category["LITTLE_ENDIAN"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])LITTLE_ENDIAN([^_[:alnum:]]|$)/ {
> +    fail("LITTLE_ENDIAN")
> +}
> +
> +BEGIN { doc["BIG_ENDIAN"] = "\
> +Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
> +    category["BIG_ENDIAN"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
> +    fail("BIG_ENDIAN")
> +}
> +
> +BEGIN { doc["sec_ptr"] = "\
> +Instead of sec_ptr, use struct bfd_section";
> +    category["sec_ptr"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])sec_ptr([^_[:alnum:]]|$)/ {
> +    fail("sec_ptr")
> +}
> +
> +BEGIN { doc["frame_unwind_unsigned_register"] = "\
> +Replace frame_unwind_unsigned_register with
frame_unwind_register_unsigned"
> +    category["frame_unwind_unsigned_register"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])frame_unwind_unsigned_register([^_[:alnum:]]|$)/ {
> +    fail("frame_unwind_unsigned_register")
> +}
> +
> +BEGIN { doc["frame_register_read"] = "\
> +Replace frame_register_read() with get_frame_register(), or \
> +possibly introduce a new method safe_get_frame_register()"
> +    category["frame_register_read"] = ari_obsolete
> +}
> +/(^|[^_[:alnum:]])frame_register_read([^_[:alnum:]]|$)/ {
> +    fail("frame_register_read")
> +}
> +
> +BEGIN { doc["read_register"] = "\
> +Replace read_register() with regcache_read() et.al."
> +    category["read_register"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])read_register([^_[:alnum:]]|$)/ {
> +    fail("read_register")
> +}
> +
> +BEGIN { doc["write_register"] = "\
> +Replace write_register() with regcache_read() et.al."
> +    category["write_register"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])write_register([^_[:alnum:]]|$)/ {
> +    fail("write_register")
> +}
> +
> +function report(name) {
> +    # Drop any trailing _P.
> +    name = gensub(/(_P|_p)$/, "", 1, name)
> +    # Convert to lower case
> +    name = tolower(name)
> +    # Split into category and bug
> +    cat = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\1", 1, name)
> +    bug = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\2", 1, name)
> +    # Report it
> +    name = cat " " bug
> +    doc[name] = "Do not use " cat " " bug ", see declaration for details"
> +    category[name] = cat
> +    fail(name)
> +}
> +
>
+/(^|[^_[:alnum:]])(DEPRECATED|deprecated|set_gdbarch_deprecated|LEGACY|lega
> cy|set_gdbarch_legacy)_/ {
> +    line = $0
> +    # print "0 =", $0
> +    while (1) {
> +	name =
>
gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:
> ]]*)(.*)$/, "\\2", 1, line)
> +	line =
>
gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:
> ]]*)(.*)$/, "\\1 \\4", 1, line)
> +	# print "name =", name, "line =", line
> +	if (name == line) break;
> +	report(name)
> +    }
> +}
> +
> +# Count the number of times each architecture method is set
> +/(^|[^_[:alnum:]])set_gdbarch_[_[:alnum:]]*([^_[:alnum:]]|$)/ {
> +    name = gensub(/^.*set_gdbarch_([_[:alnum:]]*).*$/, "\\1", 1, $0)
> +    doc["set " name] = "\
> +Call to set_gdbarch_" name
> +    category["set " name] = ari_gdbarch
> +    fail("set " name)
> +}
> +
> +# Count the number of times each tm/xm/nm macro is defined or undefined
> +/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+.*$/ \
> +&&
> !/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+_H($|[[:space:]])/
\
> +&& FILENAME ~ /(^|\/)config\/(|[^\/]*\/)(tm-|xm-|nm-).*\.h$/ {
> +    basename = gensub(/(^|.*\/)([^\/]*)$/, "\\2", 1, FILENAME)
> +    type = gensub(/^(tm|xm|nm)-.*\.h$/, "\\1", 1, basename)
> +    name =
> gensub(/^#[[:space:]]*(undef|define)[[:space:]]+([[:alnum:]_]+).*$/,
"\\2",
> 1, $0)
> +    if (type == basename) {
> +        type = "macro"
> +    }
> +    doc[type " " name] = "\
> +Do not define macros such as " name " in a tm, nm or xm file, \
> +in fact do not provide a tm, nm or xm file"
> +    category[type " " name] = ari_macro
> +    fail(type " " name)
> +}
> +
> +BEGIN { doc["deprecated_registers"] = "\
> +Replace deprecated_registers with nothing, they have reached \
> +end-of-life"
> +    category["deprecated_registers"] = ari_eol
> +}
> +/(^|[^_[:alnum:]])deprecated_registers([^_[:alnum:]]|$)/ {
> +    fail("deprecated_registers")
> +}
> +
> +BEGIN { doc["read_pc"] = "\
> +Replace READ_PC() with frame_pc_unwind; \
> +at present the inferior function call code still uses this"
> +    category["read_pc"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])read_pc[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])set_gdbarch_read_pc[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])TARGET_READ_PC[[:space:]]*\(/ {
> +    fail("read_pc")
> +}
> +
> +BEGIN { doc["write_pc"] = "\
> +Replace write_pc() with get_frame_base_address or get_frame_id; \
> +at present the inferior function call code still uses this when doing \
> +a DECR_PC_AFTER_BREAK"
> +    category["write_pc"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])write_pc[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])set_gdbarch_write_pc[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])TARGET_WRITE_PC[[:space:]]*\(/ {
> +    fail("write_pc")
> +}
> +
> +BEGIN { doc["generic_target_write_pc"] = "\
> +Replace generic_target_write_pc with a per-architecture implementation, \
> +this relies on PC_REGNUM which is being eliminated"
> +    category["generic_target_write_pc"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])generic_target_write_pc([^_[:alnum:]]|$)/ {
> +    fail("generic_target_write_pc")
> +}
> +
> +BEGIN { doc["read_sp"] = "\
> +Replace read_sp() with frame_sp_unwind"
> +    category["read_sp"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])read_sp[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])set_gdbarch_read_sp[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])TARGET_READ_SP[[:space:]]*\(/ {
> +    fail("read_sp")
> +}
> +
> +BEGIN { doc["register_cached"] = "\
> +Replace register_cached() with nothing, does not have a regcache
parameter"
> +    category["register_cached"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])register_cached[[:space:]]*\(/ {
> +    fail("register_cached")
> +}
> +
> +BEGIN { doc["set_register_cached"] = "\
> +Replace set_register_cached() with nothing, does not have a regcache
> parameter"
> +    category["set_register_cached"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])set_register_cached[[:space:]]*\(/ {
> +    fail("set_register_cached")
> +}
> +
> +# Print functions: Use versions that either check for buffer overflow
> +# or safely allocate a fresh buffer.
> +
> +BEGIN { doc["sprintf"] = "\
> +Do not use sprintf, instead use xsnprintf or xstrprintf"
> +    category["sprintf"] = ari_code
> +}
> +/(^|[^_[:alnum:]])sprintf[[:space:]]*\(/ {
> +    fail("sprintf")
> +}
> +
> +BEGIN { doc["vsprintf"] = "\
> +Do not use vsprintf(), instead use xstrvprintf"
> +    category["vsprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])vsprintf[[:space:]]*\(/ {
> +    fail("vsprintf")
> +}
> +
> +BEGIN { doc["asprintf"] = "\
> +Do not use asprintf(), instead use xstrprintf()"
> +    category["asprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])asprintf[[:space:]]*\(/ {
> +    fail("asprintf")
> +}
> +
> +BEGIN { doc["vasprintf"] = "\
> +Do not use vasprintf(), instead use xstrvprintf"
> +    fix("vasprintf", "gdb/utils.c", 1)
> +    category["vasprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])vasprintf[[:space:]]*\(/ {
> +    fail("vasprintf")
> +}
> +
> +BEGIN { doc["xasprintf"] = "\
> +Do not use xasprintf(), instead use xstrprintf"
> +    fix("xasprintf", "gdb/defs.h", 1)
> +    fix("xasprintf", "gdb/utils.c", 1)
> +    category["xasprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])xasprintf[[:space:]]*\(/ {
> +    fail("xasprintf")
> +}
> +
> +BEGIN { doc["xvasprintf"] = "\
> +Do not use xvasprintf(), instead use xstrvprintf"
> +    fix("xvasprintf", "gdb/defs.h", 1)
> +    fix("xvasprintf", "gdb/utils.c", 1)
> +    category["xvasprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])xvasprintf[[:space:]]*\(/ {
> +    fail("xvasprintf")
> +}
> +
> +# More generic memory operations
> +
> +BEGIN { doc["bzero"] = "\
> +Do not use bzero(), instead use memset()"
> +    category["bzero"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])bzero[[:space:]]*\(/ {
> +    fail("bzero")
> +}
> +
> +BEGIN { doc["strdup"] = "\
> +Do not use strdup(), instead use xstrdup()";
> +    category["strdup"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])strdup[[:space:]]*\(/ {
> +    fail("strdup")
> +}
> +
> +BEGIN { doc["strsave"] = "\
> +Do not use strsave(), instead use xstrdup() et.al."
> +    category["strsave"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])strsave[[:space:]]*\(/ {
> +    fail("strsave")
> +}
> +
> +# String compare functions
> +
> +BEGIN { doc["strnicmp"] = "\
> +Do not use strnicmp(), instead use strncasecmp()"
> +    category["strnicmp"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])strnicmp[[:space:]]*\(/ {
> +    fail("strnicmp")
> +}
> +
> +# Boolean expressions and conditionals
> +
> +BEGIN { doc["boolean"] = "\
> +Do not use `boolean'\'',  use `int'\'' instead"
> +    category["boolean"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])boolean([^_[:alnum:]]|$)/ {
> +    if (is_yacc_or_lex == 0) {
> +       fail("boolean")
> +    }
> +}
> +
> +BEGIN { doc["false"] = "\
> +Definitely do not use `false'\'' in boolean expressions"
> +    category["false"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])false([^_[:alnum:]]|$)/ {
> +    if (is_yacc_or_lex == 0) {
> +       fail("false")
> +    }
> +}
> +
> +BEGIN { doc["true"] = "\
> +Do not try to use `true'\'' in boolean expressions"
> +    category["true"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])true([^_[:alnum:]]|$)/ {
> +    if (is_yacc_or_lex == 0) {
> +       fail("true")
> +    }
> +}
> +
> +# Typedefs that are either redundant or can be reduced to `struct
> +# type *''.
> +# Must be placed before if assignment otherwise ARI exceptions
> +# are not handled correctly.
> +
> +BEGIN { doc["d_namelen"] = "\
> +Do not use dirent.d_namelen, instead use NAMELEN"
> +    category["d_namelen"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])d_namelen([^_[:alnum:]]|$)/ {
> +    fail("d_namelen")
> +}
> +
> +BEGIN { doc["strlen d_name"] = "\
> +Do not use strlen dirent.d_name, instead use NAMELEN"
> +    category["strlen d_name"] = ari_regression
> +}
>
+/(^|[^_[:alnum:]])strlen[[:space:]]*\(.*[^_[:alnum:]]d_name([^_[:alnum:]]|$
> )/ {
> +    fail("strlen d_name")
> +}
> +
> +BEGIN { doc["var_boolean"] = "\
> +Replace var_boolean with add_setshow_boolean_cmd"
> +    category["var_boolean"] = ari_regression
> +    fix("var_boolean", "gdb/command.h", 1)
> +    # fix only uses the last directory level
> +    fix("var_boolean", "cli/cli-decode.c", 2)
> +}
> +/(^|[^_[:alnum:]])var_boolean([^_[:alnum:]]|$)/ {
> +    if ($0 !~ /(^|[^_[:alnum:]])case *var_boolean:/) {
> +	fail("var_boolean")
> +    }
> +}
> +
> +BEGIN { doc["generic_use_struct_convention"] = "\
> +Replace generic_use_struct_convention with nothing, \
> +EXTRACT_STRUCT_VALUE_ADDRESS is a predicate"
> +    category["generic_use_struct_convention"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])generic_use_struct_convention([^_[:alnum:]]|$)/ {
> +    fail("generic_use_struct_convention")
> +}
> +
> +BEGIN { doc["if assignment"] = "\
> +An IF statement'\''s expression contains an assignment (the GNU coding \
> +standard discourages this)"
> +    category["if assignment"] = ari_code
> +}
> +BEGIN { doc["if clause more than 50 lines"] = "\
> +An IF statement'\''s expression expands over 50 lines"
> +    category["if clause more than 50 lines"] = ari_code
> +}
> +#
> +# Accumulate continuation lines
> +FNR == 1 {
> +    in_if = 0
> +}
> +
> +/(^|[^_[:alnum:]])if / {
> +    in_if = 1;
> +    if_brace_level = 0;
> +    if_cont_p = 0;
> +    if_count = 0;
> +    if_brace_end_pos = 0;
> +    if_full_line = "";
> +}
> +(in_if)  {
> +    # We want everything up to closing brace of same level
> +    if_count++;
> +    if (if_count > 50) {
> +	print "multiline if: " if_full_line $0
> +	fail("if clause more than 50 lines")
> +	if_brace_level = 0;
> +	if_full_line = "";
> +    } else {
> +	if (if_count == 1) {
> +	    i = index($0,"if ");
> +	} else {
> +	    i = 1;
> +	}
> +	for (i=i; i <= length($0); i++) {
> +	    char = substr($0,i,1);
> +	    if (char == "(") { if_brace_level++; }
> +	    if (char == ")") {
> +		if_brace_level--;
> +		if (!if_brace_level) {
> +		    if_brace_end_pos = i;
> +		    after_if = substr($0,i+1,length($0));
> +		    # Do not parse what is following
> +		    break;
> +		}
> +	    }
> +	}
> +	if (if_brace_level == 0) {
> +	    $0 = substr($0,1,i);
> +	    in_if = 0;
> +	} else {
> +	    if_full_line = if_full_line $0;
> +	    if_cont_p = 1;
> +	    next;
> +	}
> +    }
> +}
> +# if we arrive here, we need to concatenate, but we are at brace level 0
> +
> +(if_brace_end_pos) {
> +    $0 = if_full_line substr($0,1,if_brace_end_pos);
> +    if (if_count > 1) {
> +	# print "IF: multi line " if_count " found at " FILENAME ":" FNR "
> \"" $0 "\""
> +    }
> +    if_cont_p = 0;
> +    if_full_line = "";
> +}
> +/(^|[^_[:alnum:]])if .* = / {
> +    # print "fail in if " $0
> +    fail("if assignment")
> +}
> +(if_brace_end_pos) {
> +    $0 = $0 after_if;
> +    if_brace_end_pos = 0;
> +    in_if = 0;
> +}
> +
> +# Printout of all found bug
> +
> +BEGIN {
> +    if (print_doc) {
> +	for (bug in doc) {
> +	    fail(bug)
> +	}
> +	exit
> +    }
> +}' "$@"
> +
> Index: contrib/ari/gdb_find.sh
> ===================================================================
> RCS file: contrib/ari/gdb_find.sh
> diff -N contrib/ari/gdb_find.sh
> --- /dev/null	1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/gdb_find.sh	18 May 2012 22:31:42 -0000
> @@ -0,0 +1,41 @@
> +#!/bin/sh
> +
> +# GDB script to create list of files to check using gdb_ari.sh.
> +#
> +# Copyright (C) 2003-2012 Free Software Foundation, Inc.
> +#
> +# This file is part of GDB.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +# Make certain that the script is not running in an internationalized
> +# environment.
> +
> +LANG=C ; export LANG
> +LC_ALL=C ; export LC_ALL
> +
> +
> +# A find that prunes files that GDB users shouldn't be interested in.
> +# Use sort to order files alphabetically.
> +
> +find "$@" \
> +    -name testsuite -prune -o \
> +    -name gdbserver -prune -o \
> +    -name gnulib -prune -o \
> +    -name osf-share -prune -o \
> +    -name '*-stub.c' -prune -o \
> +    -name '*-exp.c' -prune -o \
> +    -name ada-lex.c -prune -o \
> +    -name cp-name-parser.c -prune -o \
> +    -type f -name '*.[lyhc]' -print | sort
> Index: contrib/ari/update-web-ari.sh
> ===================================================================
> RCS file: contrib/ari/update-web-ari.sh
> diff -N contrib/ari/update-web-ari.sh
> --- /dev/null	1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/update-web-ari.sh	18 May 2012 22:31:43 -0000
> @@ -0,0 +1,947 @@
> +#!/bin/sh -x
> +
> +# GDB script to create GDB ARI web page.
> +#
> +# Copyright (C) 2001-2012 Free Software Foundation, Inc.
> +#
> +# This file is part of GDB.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +# TODO: setjmp.h, setjmp and longjmp.
> +
> +# Direct stderr into stdout but still hang onto stderr (/dev/fd/3)
> +exec 3>&2 2>&1
> +ECHO ()
> +{
> +#   echo "$@" | tee /dev/fd/3 1>&2
> +    echo "$@" 1>&2
> +    echo "$@" 1>&3
> +}
> +
> +# Really mindless usage
> +if test $# -ne 4
> +then
> +    echo "Usage: $0 <snapshot/sourcedir> <tmpdir> <destdir> <project>"
1>&2
> +    exit 1
> +fi
> +snapshot=$1 ; shift
> +tmpdir=$1 ; shift
> +wwwdir=$1 ; shift
> +project=$1 ; shift
> +
> +# Try to create destination directory if it doesn't exist yet
> +if [ ! -d ${wwwdir} ]
> +then
> +  mkdir -p ${wwwdir}
> +fi
> +
> +# Fail if destination directory doesn't exist or is not writable
> +if [ ! -w ${wwwdir} -o ! -d ${wwwdir} ]
> +then
> +  echo ERROR: Can not write to directory ${wwwdir} >&2
> +  exit 2
> +fi
> +
> +if [ ! -r ${snapshot} ]
> +then
> +    echo ERROR: Can not read snapshot file 1>&2
> +    exit 1
> +fi
> +
> +# FILE formats
> +# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> +# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> +# Where ``*'' is {source,warning,indent,doschk}
> +
> +unpack_source_p=true
> +delete_source_p=true
> +
> +check_warning_p=false # broken
> +check_indent_p=false # too slow, too many fail
> +check_source_p=true
> +check_doschk_p=true
> +check_werror_p=true
> +
> +update_doc_p=true
> +update_web_p=true
> +
> +if [ ! -z "$send_email" ]
> +then
> +  send_email=false
> +fi
> +
> +if awk --version 2>&1 </dev/null | grep -i gnu > /dev/null
> +then
> +  AWK=awk
> +else
> +  AWK=gawk
> +fi
> +
> +
> +# Set up a few cleanups
> +if ${delete_source_p}
> +then
> +    trap "cd /tmp; rm -rf ${tmpdir}; exit" 0 1 2 15
> +fi
> +
> +
> +# If the first parameter is a directory,
> +#we just use it as the extracted source
> +if [ -d ${snapshot} ]
> +then
> +  module=${project}
> +  srcdir=${snapshot}
> +  aridir=${srcdir}/${module}/ari
> +  unpack_source_p=false
> +  delete_source_p=false
> +  version_in=${srcdir}/${module}/version.in
> +else
> +  # unpack the tar-ball
> +  if ${unpack_source_p}
> +  then
> +    # Was it previously unpacked?
> +    if ${delete_source_p} || test ! -d ${tmpdir}/${module}*
> +    then
> +	/bin/rm -rf "${tmpdir}"
> +	/bin/mkdir -p ${tmpdir}
> +	if [ ! -d ${tmpdir} ]
> +	then
> +	    echo "Problem creating work directory"
> +	    exit 1
> +	fi
> +	cd ${tmpdir} || exit 1
> +	echo `date`: Unpacking tar-ball ...
> +	case ${snapshot} in
> +	    *.tar.bz2 ) bzcat ${snapshot} ;;
> +	    *.tar ) cat ${snapshot} ;;
> +	    * ) ECHO Bad file ${snapshot} ; exit 1 ;;
> +	esac | tar xf -
> +    fi
> +  fi
> +
> +  module=`basename ${snapshot}`
> +  module=`basename ${module} .bz2`
> +  module=`basename ${module} .tar`
> +  srcdir=`echo ${tmpdir}/${module}*`
> +  aridir=${HOME}/ss
> +  version_in=${srcdir}/gdb/version.in
> +fi
> +
> +if [ ! -r ${version_in} ]
> +then
> +    echo ERROR: missing version file 1>&2
> +    exit 1
> +fi
> +version=`cat ${version_in}`
> +
> +
> +# THIS HAS SUFFERED BIT ROT
> +if ${check_warning_p} && test -d "${srcdir}"
> +then
> +    echo `date`: Parsing compiler warnings 1>&2
> +    cat ${root}/ari.compile | $AWK '
> +BEGIN {
> +    FS=":";
> +}
> +/^[^:]*:[0-9]*: warning:/ {
> +  file = $1;
> +  #sub (/^.*\//, "", file);
> +  warning[file] += 1;
> +}
> +/^[^:]*:[0-9]*: error:/ {
> +  file = $1;
> +  #sub (/^.*\//, "", file);
> +  error[file] += 1;
> +}
> +END {
> +  for (file in warning) {
> +    print file ":warning:" level[file]
> +  }
> +  for (file in error) {
> +    print file ":error:" level[file]
> +  }
> +}
> +' > ${root}/ari.warning.bug
> +fi
> +
> +# THIS HAS SUFFERED BIT ROT
> +if ${check_indent_p} && test -d "${srcdir}"
> +then
> +    printf "Analizing file indentation:" 1>&2
> +    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh ${project} | while
> read f
> +    do
> +	if /bin/sh ${aridir}/gdb_indent.sh < ${f} 2>/dev/null | cmp -s -
> ${f}
> +	then
> +	    :
> +	else
> +	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +	    echo "${f}:0: info: indent: Indentation does not match GNU
> indent output"
> +	fi
> +    done ) > ${wwwdir}/ari.indent.bug
> +    echo ""
> +fi
> +
> +if ${check_source_p} && test -d "${srcdir}"
> +then
> +    bugf=${wwwdir}/ari.source.bug
> +    oldf=${wwwdir}/ari.source.old
> +    srcf=${wwwdir}/ari.source.lines
> +    oldsrcf=${wwwdir}/ari.source.lines-old
> +
> +    diff=${wwwdir}/ari.source.diff
> +    diffin=${diff}-in
> +    newf1=${bugf}1
> +    oldf1=${oldf}1
> +    oldpruned=${oldf1}-pruned
> +    newpruned=${newf1}-pruned
> +
> +    cp -f ${bugf} ${oldf}
> +    cp -f ${srcf} ${oldsrcf}
> +    rm -f ${srcf}
> +    node=`uname -n`
> +    echo "`date`: Using source lines ${srcf}" 1>&2
> +    echo "`date`: Checking source code" 1>&2
> +    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh "${project}" | \
> +	xargs /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx
> --src=${srcf}
> +    ) > ${bugf}
> +    # Remove things we are not interested in to signal by email
> +    # gdbarch changes are not important here
> +    # Also convert ` into ' to avoid command substitution in script below
> +    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${oldf} > ${oldf1}
> +    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${bugf} > ${newf1}
> +    # Remove line number info so that code inclusion/deletion
> +    # has no impact on the result
> +    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${oldf1} >
${oldpruned}
> +    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${newf1} >
${newpruned}
> +    # Use diff without option to get normal diff output that
> +    # is reparsed after
> +    diff ${oldpruned} ${newpruned} > ${diffin}
> +    # Only keep new warnings
> +    sed -n -e "/^>.*/p" ${diffin} > ${diff}
> +    sedscript=${wwwdir}/sedscript
> +    script=${wwwdir}/script
> +    sed -n -e "s|\(^[0-9,]*\)a\(.*\)|echo \1a\2 \n \
> +	sed -n \'\2s:\\\\(.*\\\\):> \\\\1:p\' ${newf1}|p" \
> +	-e "s|\(^[0-9,]*\)d\(.*\)|echo \1d\2\n \
> +	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1}|p" \
> +	-e "s|\(^[0-9,]*\)c\(.*\)|echo \1c\2\n \
> +	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1} \n \
> +	sed -n \"\2s:\\\\(.*\\\\):> \\\\1:p\" ${newf1}|p" \
> +	${diffin} > ${sedscript}
> +    ${SHELL} ${sedscript} > ${wwwdir}/message
> +    sed -n \
> +	-e "s;\(.*\);echo \\\"\1\\\";p" \
> +	-e "s;.*< \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${oldsrcf};p" \
> +	-e "s;.*> \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${srcf};p" \
> +	${wwwdir}/message > ${script}
> +    ${SHELL} ${script} > ${wwwdir}/mail-message
> +    if [ "x${branch}" != "x" ]; then
> +	email_suffix="`date` in ${branch}"
> +    else
> +	email_suffix="`date`"
> +    fi
> +
> +    if [ "$send_email" == "true" ]; then
> +      if [ "${node}" = "sourceware.org" ]; then
> +	warning_email=gdb-patches@sourceware.org
> +      else
> +        # Use default email
> +	warning_email=${USER}@${node}
> +      fi
> +
> +      # Check if ${diff} is not empty
> +      if [ -s ${diff} ]; then
> +	# Send an email $warning_email
> +	mutt -s "New ARI warning ${email_suffix}" \
> +	    ${warning_email} < ${wwwdir}/mail-message
> +      else
> +        if [ -s ${wwwdir}/${mail-message} ]; then
> +	  # Send an email to $warning_email
> +	  mutt -s "ARI warning list change ${email_suffix}" \
> +	    ${warning_email} < ${wwwdir}/mail-message
> +        fi
> +      fi
> +    fi
> +fi
> +
> +
> +
> +
> +if ${check_doschk_p} && test -d "${srcdir}"
> +then
> +    echo "`date`: Checking for doschk" 1>&2
> +    rm -f "${wwwdir}"/ari.doschk.*
> +    fnchange_lst="${srcdir}"/gdb/config/djgpp/fnchange.lst
> +    fnchange_awk="${wwwdir}"/ari.doschk.awk
> +    doschk_in="${wwwdir}"/ari.doschk.in
> +    doschk_out="${wwwdir}"/ari.doschk.out
> +    doschk_bug="${wwwdir}"/ari.doschk.bug
> +    doschk_char="${wwwdir}"/ari.doschk.char
> +
> +    # Transform fnchange.lst into fnchange.awk.  The program DJTAR
> +    # does a textual substitution of each file name using the list.
> +    # Generate an awk script that does the equivalent - matches an
> +    # exact line and then outputs the replacement.
> +
> +    sed -e 's;@[^@]*@[/]*\([^ ]*\) @[^@]*@[/]*\([^ ]*\);\$0 == "\1" {
print
> "\2"\; next\; };' \
> +	< "${fnchange_lst}" > "${fnchange_awk}"
> +    echo '{ print }' >> "${fnchange_awk}"
> +
> +    # Do the raw analysis - transform the list of files into the DJGPP
> +    # equivalents putting it in the .in file
> +    ( cd "${srcdir}" && find * \
> +	-name '*.info-[0-9]*' -prune \
> +	-o -name tcl -prune \
> +	-o -name itcl -prune \
> +	-o -name tk -prune \
> +	-o -name libgui -prune \
> +	-o -name tix -prune \
> +	-o -name dejagnu -prune \
> +	-o -name expect -prune \
> +	-o -type f -print ) \
> +    | $AWK -f ${fnchange_awk} > ${doschk_in}
> +
> +    # Start with a clean slate
> +    rm -f ${doschk_bug}
> +
> +    # Check for any invalid characters.
> +    grep '[\+\,\;\=\[\]\|\<\>\\\"\:\?\*]' < ${doschk_in} > ${doschk_char}
> +    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +    sed < ${doschk_char} >> ${doschk_bug} \
> +	-e s'/$/:0: dos: DOSCHK: Invalid DOS character/'
> +
> +    # Magic to map ari.doschk.out to ari.doschk.bug goes here
> +    doschk < ${doschk_in} > ${doschk_out}
> +    cat ${doschk_out} | $AWK >> ${doschk_bug} '
> +BEGIN {
> +    state = 1;
> +    invalid_dos = state++; bug[invalid_dos] = "invalid DOS file name";
> category[invalid_dos] = "dos";
> +    same_dos = state++;    bug[same_dos]    = "DOS 8.3";
> category[same_dos] = "dos";
> +    same_sysv = state++;   bug[same_sysv]   = "SysV";
> +    long_sysv = state++;   bug[long_sysv]   = "long SysV";
> +    internal = state++;    bug[internal]    = "internal doschk";
> category[internal] = "internal";
> +    state = 0;
> +}
> +/^$/ { state = 0; next; }
> +/^The .* not valid DOS/     { state = invalid_dos; next; }
> +/^The .* same DOS/          { state = same_dos; next; }
> +/^The .* same SysV/         { state = same_sysv; next; }
> +/^The .* too long for SysV/ { state = long_sysv; next; }
> +/^The .* /                  { state = internal; next; }
> +
> +NF == 0 { next }
> +
> +NF == 3 { name = $1 ; file = $3 }
> +NF == 1 { file = $1 }
> +NF > 3 && $2 == "-" { file = $1 ; name = gensub(/^.* - /, "", 1) }
> +
> +state == same_dos {
> +    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +    print  file ":0: " category[state] ": " \
> +	name " " bug[state] " " " dup: " \
> +	" DOSCHK - the names " name " and " file " resolve to the same" \
> +	" file on a " bug[state] \
> +	" system.<br>For DOS, this can be fixed by modifying the file" \
> +	" fnchange.lst."
> +    next
> +}
> +state == invalid_dos {
> +    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
> +    print file ":0: " category[state] ": "  name ": DOSCHK - " name
> +    next
> +}
> +state == internal {
> +    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
> +    print file ":0: " category[state] ": "  bug[state] ": DOSCHK - a " \
> +	bug[state] " problem"
> +}
> +'
> +fi
> +
> +
> +
> +if ${check_werror_p} && test -d "${srcdir}"
> +then
> +    echo "`date`: Checking Makefile.in for non- -Werror rules"
> +    rm -f ${wwwdir}/ari.werror.*
> +    cat "${srcdir}/${project}/Makefile.in" | $AWK >
> ${wwwdir}/ari.werror.bug '
> +BEGIN {
> +    count = 0
> +    cont_p = 0
> +    full_line = ""
> +}
> +/^[-_[:alnum:]]+\.o:/ {
> +    file = gensub(/.o:.*/, "", 1) ".c"
> +}
> +
> +/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1;
next;
> }
> +cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
> +
> +/\$\(COMPILE\.pre\)/ {
> +    print file " has  line " $0
> +    if (($0 !~ /\$\(.*ERROR_CFLAGS\)/) && ($0 !~
/\$\(INTERNAL_CFLAGS\)/))
> {
> +	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +	print "'"${project}"'/" file ":0: info: Werror: The file is not
> being compiled with -Werror"
> +    }
> +}
> +'
> +fi
> +
> +
> +# From the warnings, generate the doc and indexed bug files
> +if ${update_doc_p}
> +then
> +    cd ${wwwdir}
> +    rm -f ari.doc ari.idx ari.doc.bug
> +    # Generate an extra file containing all the bugs that the ARI can
> detect.
> +    /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --print-doc >>
> ari.doc.bug
> +    cat ari.*.bug | $AWK > ari.idx '
> +BEGIN {
> +    FS=": *"
> +}
> +{
> +    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +    file = $1
> +    line = $2
> +    category = $3
> +    bug = $4
> +    if (! (bug in cat)) {
> +	cat[bug] = category
> +	# strip any trailing .... (supplement)
> +	doc[bug] = gensub(/ \([^\)]*\)$/, "", 1, $5)
> +	count[bug] = 0
> +    }
> +    if (file != "") {
> +	count[bug] += 1
> +	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> +	print bug ":" file ":" category
> +    }
> +    # Also accumulate some categories as obsolete
> +    if (category == "deprecated") {
> +	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> +	if (file != "") {
> +	    print category ":" file ":" "obsolete"
> +	}
> +	#count[category]++
> +	#doc[category] = "Contains " category " code"
> +    }
> +}
> +END {
> +    i = 0;
> +    for (bug in count) {
> +	# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> +	print bug ":" count[bug] ":" cat[bug] ":" doc[bug] >> "ari.doc"
> +    }
> +}
> +'
> +fi
> +
> +
> +# print_toc BIAS MIN_COUNT CATEGORIES TITLE
> +
> +# Print a table of contents containing the bugs CATEGORIES.  If the
> +# BUG count >= MIN_COUNT print it in the table-of-contents.  If
> +# MIN_COUNT is non -ve, also include a link to the table.Adjust the
> +# printed BUG count by BIAS.
> +
> +all=
> +
> +print_toc ()
> +{
> +    bias="$1" ; shift
> +    min_count="$1" ; shift
> +
> +    all=" $all $1 "
> +    categories=""
> +    for c in $1; do
> +	categories="${categories} categories[\"${c}\"] = 1 ;"
> +    done
> +    shift
> +
> +    title="$@" ; shift
> +
> +    echo "<p>" >> ${newari}
> +    echo "<a name=${title}>" | tr '[A-Z]' '[a-z]' >> ${newari}
> +    echo "<h3>${title}</h3>" >> ${newari}
> +    cat >> ${newari} # description
> +
> +    cat >> ${newari} <<EOF
> +<p>
> +<table>
> +<tr><th align=left>BUG</th><th>Total</th><th
> align=left>Description</th></tr>
> +EOF
> +    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> +    cat ${wwwdir}/ari.doc \
> +    | sort -t: +1rn -2 +0d \
> +    | $AWK >> ${newari} '
> +BEGIN {
> +    FS=":"
> +    '"$categories"'
> +    MIN_COUNT = '${min_count}'
> +    BIAS = '${bias}'
> +    total = 0
> +    nr = 0
> +}
> +{
> +    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> +    bug = $1
> +    count = $2
> +    category = $3
> +    doc = $4
> +    if (count < MIN_COUNT) next
> +    if (!(category in categories)) next
> +    nr += 1
> +    total += count
> +    printf "<tr>"
> +    printf "<th align=left valign=top><a name=\"%s\">", bug
> +    printf "%s", gensub(/_/, " ", "g", bug)
> +    printf "</a></th>"
> +    printf "<td align=right valign=top>"
> +    if (count > 0 && MIN_COUNT >= 0) {
> +	printf "<a href=\"#,%s\">%d</a></td>", bug, count + BIAS
> +    } else {
> +	printf "%d", count + BIAS
> +    }
> +    printf "</td>"
> +    printf "<td align=left valign=top>%s</td>", doc
> +    printf "</tr>"
> +    print ""
> +}
> +END {
> +    print "<tr><th align=right valign=top>" nr "</th><th align=right
> valign=top>" total "</th><td></td></tr>"
> +}
> +'
> +cat >> ${newari} <<EOF
> +</table>
> +<p>
> +EOF
> +}
> +
> +
> +print_table ()
> +{
> +    categories=""
> +    for c in $1; do
> +	categories="${categories} categories[\"${c}\"] = 1 ;"
> +    done
> +    # Remember to prune the dir prefix from projects files
> +    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> +    cat ${wwwdir}/ari.idx | $AWK >> ${newari} '
> +function qsort (table,
> +		middle, tmp, left, nr_left, right, nr_right, result) {
> +    middle = ""
> +    for (middle in table) { break; }
> +    nr_left = 0;
> +    nr_right = 0;
> +    for (tmp in table) {
> +	if (tolower(tmp) < tolower(middle)) {
> +	    nr_left++
> +	    left[tmp] = tmp
> +	} else if (tolower(tmp) > tolower(middle)) {
> +	    nr_right++
> +	    right[tmp] = tmp
> +	}
> +    }
> +    #print "qsort " nr_left " " middle " " nr_right > "/dev/stderr"
> +    result = ""
> +    if (nr_left > 0) {
> +	result = qsort(left) SUBSEP
> +    }
> +    result = result middle
> +    if (nr_right > 0) {
> +	result = result SUBSEP qsort(right)
> +    }
> +    return result
> +}
> +function print_heading (where, bug_i) {
> +    print ""
> +    print "<tr border=1>"
> +    print "<th align=left>File</th>"
> +    print "<th align=left><em>Total</em></th>"
> +    print "<th></th>"
> +    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> +	bug = i2bug[bug_i];
> +	printf "<th>"
> +	# The title names are offset by one.  Otherwize, when the browser
> +	# jumps to the name it leaves out half the relevant column.
> +	#printf "<a name=\",%s\">&nbsp;</a>", bug
> +	printf "<a name=\",%s\">&nbsp;</a>", i2bug[bug_i-1]
> +	printf "<a href=\"#%s\">", bug
> +	printf "%s", gensub(/_/, " ", "g", bug)
> +	printf "</a>\n"
> +	printf "</th>\n"
> +    }
> +    #print "<th></th>"
> +    printf "<th><a name=\"%s,\">&nbsp;</a></th>\n", i2bug[bug_i-1]
> +    print "<th align=left><em>Total</em></th>"
> +    print "<th align=left>File</th>"
> +    print "</tr>"
> +}
> +function print_totals (where, bug_i) {
> +    print "<th align=left><em>Totals</em></th>"
> +    printf "<th align=right>"
> +    printf "<em>%s</em>", total
> +    printf "&gt;"
> +    printf "</th>\n"
> +    print "<th></th>";
> +    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> +	bug = i2bug[bug_i];
> +	printf "<th align=right>"
> +	printf "<em>"
> +	printf "<a href=\"#%s\">%d</a>", bug, bug_total[bug]
> +	printf "</em>";
> +	printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, where], bug
> +	printf "<a href=\"#%s,%s\">v</a>", next_file[bug, where], bug
> +	printf "<a name=\"%s,%s\">&nbsp;</a>", where, bug
> +	printf "</th>";
> +	print ""
> +    }
> +    print "<th></th>"
> +    printf "<th align=right>"
> +    printf "<em>%s</em>", total
> +    printf "&lt;"
> +    printf "</th>\n"
> +    print "<th align=left><em>Totals</em></th>"
> +    print "</tr>"
> +}
> +BEGIN {
> +    FS = ":"
> +    '"${categories}"'
> +    nr_file = 0;
> +    nr_bug = 0;
> +}
> +{
> +    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> +    bug = $1
> +    file = $2
> +    category = $3
> +    # Interested in this
> +    if (!(category in categories)) next
> +    # Totals
> +    db[bug, file] += 1
> +    bug_total[bug] += 1
> +    file_total[file] += 1
> +    total += 1
> +}
> +END {
> +
> +    # Sort the files and bugs creating indexed lists.
> +    nr_bug = split(qsort(bug_total), i2bug, SUBSEP);
> +    nr_file = split(qsort(file_total), i2file, SUBSEP);
> +
> +    # Dummy entries for first/last
> +    i2file[0] = 0
> +    i2file[-1] = -1
> +    i2bug[0] = 0
> +    i2bug[-1] = -1
> +
> +    # Construct a cycle of next/prev links.  The file/bug "0" and "-1"
> +    # are used to identify the start/end of the cycle.  Consequently,
> +    # prev(0) = -1 (prev of start is the end) and next(-1) = 0 (next
> +    # of end is the start).
> +
> +    # For all the bugs, create a cycle that goes to the prev / next file.
> +    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> +	bug = i2bug[bug_i]
> +	prev = 0
> +	prev_file[bug, 0] = -1
> +	next_file[bug, -1] = 0
> +	for (file_i = 1; file_i <= nr_file; file_i++) {
> +	    file = i2file[file_i]
> +	    if ((bug, file) in db) {
> +		prev_file[bug, file] = prev
> +		next_file[bug, prev] = file
> +		prev = file
> +	    }
> +	}
> +	prev_file[bug, -1] = prev
> +	next_file[bug, prev] = -1
> +    }
> +
> +    # For all the files, create a cycle that goes to the prev / next bug.
> +    for (file_i = 1; file_i <= nr_file; file_i++) {
> +	file = i2file[file_i]
> +	prev = 0
> +	prev_bug[file, 0] = -1
> +	next_bug[file, -1] = 0
> +	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> +	    bug = i2bug[bug_i]
> +	    if ((bug, file) in db) {
> +		prev_bug[file, bug] = prev
> +		next_bug[file, prev] = bug
> +		prev = bug
> +	    }
> +	}
> +	prev_bug[file, -1] = prev
> +	next_bug[file, prev] = -1
> +    }
> +
> +    print "<table border=1 cellspacing=0>"
> +    print "<tr></tr>"
> +    print_heading(0);
> +    print "<tr></tr>"
> +    print_totals(0);
> +    print "<tr></tr>"
> +
> +    for (file_i = 1; file_i <= nr_file; file_i++) {
> +	file = i2file[file_i];
> +	pfile = gensub(/^'${project}'\//, "", 1, file)
> +	print ""
> +	print "<tr>"
> +	print "<th align=left><a name=\"" file ",\">" pfile "</a></th>"
> +	printf "<th align=right>"
> +	printf "%s", file_total[file]
> +	printf "<a href=\"#%s,%s\">&gt;</a>", file, next_bug[file, 0]
> +	printf "</th>\n"
> +	print "<th></th>"
> +	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> +	    bug = i2bug[bug_i];
> +	    if ((bug, file) in db) {
> +		printf "<td align=right>"
> +		printf "<a href=\"#%s\">%d</a>", bug, db[bug, file]
> +		printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, file], bug
> +		printf "<a href=\"#%s,%s\">v</a>", next_file[bug, file], bug
> +		printf "<a name=\"%s,%s\">&nbsp;</a>", file, bug
> +		printf "</td>"
> +		print ""
> +	    } else {
> +		print "<td>&nbsp;</td>"
> +		#print "<td></td>"
> +	    }
> +	}
> +	print "<th></th>"
> +	printf "<th align=right>"
> +	printf "%s", file_total[file]
> +	printf "<a href=\"#%s,%s\">&lt;</a>", file, prev_bug[file, -1]
> +	printf "</th>\n"
> +	print "<th align=left>" pfile "</th>"
> +	print "</tr>"
> +    }
> +
> +    print "<tr></tr>"
> +    print_totals(-1)
> +    print "<tr></tr>"
> +    print_heading(-1);
> +    print "<tr></tr>"
> +    print ""
> +    print "</table>"
> +    print ""
> +}
> +'
> +}
> +
> +
> +# Make the scripts available
> +cp ${aridir}/gdb_*.sh ${wwwdir}
> +
> +# Compute the ARI index - ratio of zero vs non-zero problems.
> +indexes=`awk '
> +BEGIN {
> +    FS=":"
> +}
> +{
> +    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> +    bug = $1; count = $2; category = $3; doc = $4
> +
> +    if (bug ~ /^legacy_/) legacy++
> +    if (bug ~ /^deprecated_/) deprecated++
> +
> +    if (category !~ /^gdbarch$/) {
> +	bugs += count
> +    }
> +    if (count == 0) {
> +	oks++
> +    }
> +}
> +END {
> +    #print "tests/ok:", nr / ok
> +    #print "bugs/tests:", bugs / nr
> +    #print "bugs/ok:", bugs / ok
> +    print bugs / ( oks + legacy + deprecated )
> +}
> +' ${wwwdir}/ari.doc`
> +
> +# Merge, generating the ARI tables.
> +if ${update_web_p}
> +then
> +    echo "Create the ARI table" 1>&2
> +    oldari=${wwwdir}/old.html
> +    ari=${wwwdir}/index.html
> +    newari=${wwwdir}/new.html
> +    rm -f ${newari} ${newari}.gz
> +    cat <<EOF >> ${newari}
> +<html>
> +<head>
> +<title>A.R. Index for GDB version ${version}</title>
> +</head>
> +<body>
> +
> +<center><h2>A.R. Index for GDB version ${version}<h2></center>
> +
> +<!-- body, update above using ../index.sh -->
> +
> +<!-- Navigation.  This page contains the following anchors.
> +"BUG": The definition of the bug.
> +"FILE,BUG": The row/column containing FILEs BUG count
> +"0,BUG", "-1,BUG": The top/bottom total for BUGs column.
> +"FILE,O", "FILE,-1": The left/right total for FILEs row.
> +",BUG": The top title for BUGs column.
> +"FILE,": The left title for FILEs row.
> +-->
> +
> +<center><h3>${indexes}</h3></center>
> +<center><h3>You can not take this seriously!</h3></center>
> +
> +<center>
> +Also available:
> +<a href="../gdb/ari/">most recent branch</a>
> +|
> +<a href="../gdb/current/ari/">current</a>
> +|
> +<a href="../gdb/download/ari/">last release</a>
> +</center>
> +
> +<center>
> +Last updated: `date -u`
> +</center>
> +EOF
> +
> +    print_toc 0 1 "internal regression" Critical <<EOF
> +Things previously eliminated but returned.  This should always be empty.
> +EOF
> +
> +    print_table "regression code comment obsolete gettext"
> +
> +    print_toc 0 0 code Code <<EOF
> +Coding standard problems, portability problems, readability problems.
> +EOF
> +
> +    print_toc 0 0 comment Comments <<EOF
> +Problems concerning comments in source files.
> +EOF
> +
> +    print_toc 0 0 gettext GetText <<EOF
> +Gettext related problems.
> +EOF
> +
> +    print_toc 0 -1 dos DOS 8.3 File Names <<EOF
> +File names with problems on 8.3 file systems.
> +EOF
> +
> +    print_toc -2 -1 deprecated Deprecated <<EOF
> +Mechanisms that have been replaced with something better, simpler,
> +cleaner; or are no longer required by core-GDB.  New code should not
> +use deprecated mechanisms.  Existing code, when touched, should be
> +updated to use non-deprecated mechanisms.  See obsolete and deprecate.
> +(The declaration and definition are hopefully excluded from count so
> +zero should indicate no remaining uses).
> +EOF
> +
> +    print_toc 0 0 obsolete Obsolete <<EOF
> +Mechanisms that have been replaced, but have not yet been marked as
> +such (using the deprecated_ prefix).  See deprecate and deprecated.
> +EOF
> +
> +    print_toc 0 -1 deprecate Deprecate <<EOF
> +Mechanisms that are a candidate for being made obsolete.  Once core
> +GDB no longer depends on these mechanisms and/or there is a
> +replacement available, these mechanims can be deprecated (adding the
> +deprecated prefix) obsoleted (put into category obsolete) or deleted.
> +See obsolete and deprecated.
> +EOF
> +
> +    print_toc -2 -1 legacy Legacy <<EOF
> +Methods used to prop up targets using targets that still depend on
> +deprecated mechanisms. (The method's declaration and definition are
> +hopefully excluded from count).
> +EOF
> +
> +    print_toc -2 -1 gdbarch Gdbarch <<EOF
> +Count of calls to the gdbarch set methods.  (Declaration and
> +definition hopefully excluded from count).
> +EOF
> +
> +    print_toc 0 -1 macro Macro <<EOF
> +Breakdown of macro definitions (and #undef) in configuration files.
> +EOF
> +
> +    print_toc 0 0 regression Fixed <<EOF
> +Problems that have been expunged from the source code.
> +EOF
> +
> +    # Check for invalid categories
> +    for a in $all; do
> +	alls="$alls all[$a] = 1 ;"
> +    done
> +    cat ari.*.doc | $AWK >> ${newari} '
> +BEGIN {
> +    FS = ":"
> +    '"$alls"'
> +}
> +{
> +    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> +    bug = $1
> +    count = $2
> +    category = $3
> +    doc = $4
> +    if (!(category in all)) {
> +	print "<b>" category "</b>: no documentation<br>"
> +    }
> +}
> +'
> +
> +    cat >> ${newari} <<EOF
> +<center>
> +Input files:
> +`( cd ${wwwdir} && ls ari.*.bug ari.idx ari.doc ) | while read f
> +do
> +    echo "<a href=\"${f}\">${f}</a>"
> +done`
> +</center>
> +
> +<center>
> +Scripts:
> +`( cd ${wwwdir} && ls *.sh ) | while read f
> +do
> +    echo "<a href=\"${f}\">${f}</a>"
> +done`
> +</center>
> +
> +<!-- /body, update below using ../index.sh -->
> +</body>
> +</html>
> +EOF
> +
> +    for i in . .. ../..; do
> +	x=${wwwdir}/${i}/index.sh
> +	if test -x $x; then
> +	    $x ${newari}
> +	    break
> +	fi
> +    done
> +
> +    gzip -c -v -9 ${newari} > ${newari}.gz
> +
> +    cp ${ari} ${oldari}
> +    cp ${ari}.gz ${oldari}.gz
> +    cp ${newari} ${ari}
> +    cp ${newari}.gz ${ari}.gz
> +
> +fi # update_web_p
> +
> +# ls -l ${wwwdir}
> +
> +exit 0



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-18 22:41 [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory Pierre Muller
  2012-05-25  8:09 ` PING " Pierre Muller
@ 2012-05-25 19:47 ` Jan Kratochvil
  2012-05-26 12:41   ` [RFA-v2] " Pierre Muller
  2012-05-26  0:12 ` [RFA] " Sergio Durigan Junior
  2 siblings, 1 reply; 32+ messages in thread
From: Jan Kratochvil @ 2012-05-25 19:47 UTC (permalink / raw)
  To: Pierre Muller; +Cc: gdb-patches

On Sat, 19 May 2012 00:40:24 +0200, Pierre Muller wrote:
>   Here is a RFA for inclusion of scripts to gdb/contrib/ari.

The patch is corrupted by line wrapping, 48 lines and some are not trivial to
recover.


Thanks,
Jan


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-18 22:41 [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory Pierre Muller
  2012-05-25  8:09 ` PING " Pierre Muller
  2012-05-25 19:47 ` Jan Kratochvil
@ 2012-05-26  0:12 ` Sergio Durigan Junior
  2 siblings, 0 replies; 32+ messages in thread
From: Sergio Durigan Junior @ 2012-05-26  0:12 UTC (permalink / raw)
  To: Pierre Muller; +Cc: gdb-patches

On Friday, May 18 2012, Pierre Muller wrote:

>   Here is a RFA for inclusion of scripts to gdb/contrib/ari.

As Jan pointed out, the patch does not apply.

I am assuming that, if you are asking for opinions, then you are
volunteering to fix the ARI scripts :-).  Here are my opinions.  Thanks
a lot for doing this!

> Index: contrib/ari/create-web-ari-in-src.sh
> ===================================================================
> RCS file: contrib/ari/create-web-ari-in-src.sh
> diff -N contrib/ari/create-web-ari-in-src.sh
> --- /dev/null	1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/create-web-ari-in-src.sh	18 May 2012 22:31:42 -0000

Is this script called in some cronjob, or is it intended to be called
directly by the user?  I don't see it being called anywhere in the
sources.  If it is called from a cronjob, then maybe it's worth
providing an example of a simple crontab script which would work for
this purpose.

Also, if it is supposed to be called by the user, then I think it should
accept command line arguments.

> @@ -0,0 +1,68 @@
> +#! /bin/sh
> +
> +# GDB script to create web ARI page directly from within gdb/ari directory.
> +#
> +# Copyright (C) 2012 Free Software Foundation, Inc.

Is this a new script?  If not, I believe the copyright notice should
include the previous years of existence.

> +# This file is part of GDB.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.+
> +set -x
> +
> +# Determine directory of current script.
> +scriptpath=`dirname $0`

Since we are now putting the scripts in the main tree, which IMO is an
incentive for everyone to run them and check the results, I believe we
cannot always assume that certain programs are available at the user's
machine.  For this reason, maybe it's good to check if the executables
(like `dirname') being used in these sources actually exist?

> Index: contrib/ari/gdb_ari.sh
> ===================================================================
> RCS file: contrib/ari/gdb_ari.sh
> diff -N contrib/ari/gdb_ari.sh
> --- /dev/null	1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/gdb_ari.sh	18 May 2012 22:31:42 -0000


> +awk -- '
> +BEGIN {

What do you think of creating a new file which would contain this giant
awk script?  I see there are many "sections" in this script, so maybe we
could even separate those script into logical files and use multiple `-f
FILE' arguments to awk.  But I guess only putting this huge script into
a separate file is enough for now...

Thanks,

-- 
Sergio


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFA-v2] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-25 19:47 ` Jan Kratochvil
@ 2012-05-26 12:41   ` Pierre Muller
  2012-05-27  4:06     ` Sergio Durigan Junior
  0 siblings, 1 reply; 32+ messages in thread
From: Pierre Muller @ 2012-05-26 12:41 UTC (permalink / raw)
  To: 'Jan Kratochvil', 'Sergio Durigan Junior'; +Cc: gdb-patches

[-- Attachment #1: Type: text/plain, Size: 11712 bytes --]

> The patch is corrupted by line wrapping, 48 lines and some are not trivial
> to recover.
 Sorry,
I hope the attached patch will apply correctly.
I made small changes, one of them is to remove completely
the email sending part of the update-web-ari.sh script
as several person spoke up against it in its present form.

  Concerning the new create-web-ari-in-src.sh, 
this is indeed a new script (hence the 2012 copyright only)
and it is just a way to be able to generate the ARI index.html web 
page without any parameters.

  It basically only give default parameters
to update-web-ari.sh script, which requires four parameters.

  I hope this clarifies some of your questions.

  Concerning Sergio's suggestion to separate out the awk script into 
a separate file, I would like to minimize the changes relative to the
existing ss
cvs repository files.
  About the use of dirname, I think that
direname is like basename part of 
coreutils, and basename is already use several times
inside update-web-ari script in ss.
  I agree that being made public and thus available to 
many users, it would be nice to chack availability, and add a workaround,
but I have no 
precise how to do it, probably using a configure or Makefile could help here
.
Note that gdb directory cvonfigure script seems to contain both
dirname and basename...

  I hope you will be able to generate a ARI web page,
and give more feedbacks,


Pierre Muller
as unofficial ARI maintainer

The ChangeLog entry is unchanged:

2012-05-26  Pierre Muller  <muller@ics.u-strasbg.fr>

	* contrib/ari/create-web-ari-in-src.sh: New file.
	* contrib/ari/gdb_ari.sh: New file.
	* contrib/ari/gdb_find.sh: New file.
	* contrib/ari/update-web-ari.sh: New file.

The patch is in the attached file ari.patch

  To help to show what changed, here is the output of
diff -u -p -u ../../../ss ./contrib./ari
(../../../ss is the location of my ss checkout)

$ cat  diff-to-ss
only in ./contrib/ari: create-web-ari-in-src.sh
diff -b -u -p ../../ss/gdb_ari.sh ./contrib/ari/gdb_ari.sh
--- ../../ss/gdb_ari.sh 2012-05-26 13:59:56.744837000 +0200
+++ ./contrib/ari/gdb_ari.sh    2012-05-26 13:47:46.183454200 +0200
@@ -1,9 +1,31 @@
 #!/bin/sh

+# GDB script to list of problems using awk.
+#
+# Copyright (C) 2002-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
 LANG=c ; export LANG
 LC_ALL=c ; export LC_ALL

-# Permenant checks take the form:
+# Permanent checks take the form:

 #     Do not use XXXX, ISO C 90 implies YYYY
 #     Do not use XXXX, instead use YYYY''.
@@ -564,7 +586,7 @@ Function name starts lower case but has
     editCase_full_line = $0
 }

-# Only function implemenation should be on first column
+# Only function implementation should be on first column
 BEGIN { doc["function call in first column"] = "\
 Function name in first column should be restricted to function
implementation"
     category["function call in first column"] = ari_code
@@ -676,15 +698,16 @@ FNR == 1 {
     }
 }

-BEGIN { doc["inline"] = "\
-Do not use the inline attribute; \
-since the compiler generally ignores this, better algorithm selection \
-is needed to improved performance"
-    category["inline"] = ari_code
-}
-/(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
-    fail("inline")
-}
+# Commented out, but left inside sources, just in case.
+# BEGIN { doc["inline"] = "\
+# Do not use the inline attribute; \
+# since the compiler generally ignores this, better algorithm selection \
+# is needed to improved performance"
+#    category["inline"] = ari_code
+# }
+# /(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
+#     fail("inline")
+# }

 # This test is obsolete as this type
 # has been deprecated and finally suppressed from GDB sources
Seulement dans ../../ss: gdb_ari.sh~
Seulement dans ../../ss: gdb_copyright.sh
Seulement dans ../../ss: gdb_find.log
diff -b -u -p ../../ss/gdb_find.sh ./contrib/ari/gdb_find.sh
--- ../../ss/gdb_find.sh        2011-03-21 23:52:35.465984900 +0100
+++ ./contrib/ari/gdb_find.sh   2012-05-26 13:47:46.183454200 +0200
@@ -1,5 +1,31 @@
 #!/bin/sh

+# GDB script to create list of files to check using gdb_ari.sh.
+#
+# Copyright (C) 2003-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=C ; export LANG
+LC_ALL=C ; export LC_ALL
+
+
 # A find that prunes files that GDB users shouldn't be interested in.
 # Use sort to order files alphabetically.

Only in ./contrib/ari: update-web-ari.sh
Only in ../../ss: update-web-cvs-ari
This is because I choose to add a .sh suffix to update-web-ari script.

$ diff -b -u -p  ../../ss/update-web-ari  ./contrib/ari/update-web-ari.sh
--- ../../ss/update-web-ari     2011-03-15 17:38:23.893984500 +0100
+++ ./contrib/ari/update-web-ari.sh     2012-05-26 13:47:46.199054300 +0200
@@ -1,10 +1,25 @@
 #!/bin/sh -x

-# TODO: setjmp.h, setjmp and longjmp.
-
+# GDB script to create GDB ARI web page.
+#
+# Copyright (C) 2001-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.

-PATH=/bin:/usr/bin:/usr/local/bin:$HOME/bin
-export PATH
+# TODO: setjmp.h, setjmp and longjmp.

 # Direct stderr into stdout but still hang onto stderr (/dev/fd/3)
 exec 3>&2 2>&1
@@ -18,7 +33,7 @@ ECHO ()
 # Really mindless usage
 if test $# -ne 4
 then
-    echo "Usage: $0 <snapshot> <tmpdir> <destdir> <project>" 1>&2
+    echo "Usage: $0 <snapshot/sourcedir> <tmpdir> <destdir> <project>" 1>&2
     exit 1
 fi
 snapshot=$1 ; shift
@@ -26,6 +41,13 @@ tmpdir=$1 ; shift
 wwwdir=$1 ; shift
 project=$1 ; shift

+# Try to create destination directory if it doesn't exist yet
+if [ ! -d ${wwwdir} ]
+then
+  mkdir -p ${wwwdir}
+fi
+
+# Fail if destination directory doesn't exist or is not writable
 if [ ! -w ${wwwdir} -o ! -d ${wwwdir} ]
 then
   echo ERROR: Can not write to directory ${wwwdir} >&2
@@ -56,7 +78,6 @@ check_werror_p=true
 update_doc_p=true
 update_web_p=true

-
 if awk --version 2>&1 </dev/null | grep -i gnu > /dev/null
 then
   AWK=awk
@@ -72,14 +93,25 @@ then
 fi


-# unpack the tar-ball
-if ${unpack_source_p}
-then
+# If the first parameter is a directory,
+#we just use it as the extracted source
+if [ -d ${snapshot} ]
+then
+  module=${project}
+  srcdir=${snapshot}
+  aridir=${srcdir}/${module}/ari
+  unpack_source_p=false
+  delete_source_p=false
+  version_in=${srcdir}/${module}/version.in
+else
+  # unpack the tar-ball
+  if ${unpack_source_p}
+  then
     # Was it previously unpacked?
     if ${delete_source_p} || test ! -d ${tmpdir}/${module}*
     then
        /bin/rm -rf "${tmpdir}"
-       /bin/mkdir ${tmpdir}
+       /bin/mkdir -p ${tmpdir}
        if [ ! -d ${tmpdir} ]
        then
            echo "Problem creating work directory"
@@ -93,12 +125,16 @@ then
            * ) ECHO Bad file ${snapshot} ; exit 1 ;;
        esac | tar xf -
     fi
+  fi
+
+  module=`basename ${snapshot}`
+  module=`basename ${module} .bz2`
+  module=`basename ${module} .tar`
+  srcdir=`echo ${tmpdir}/${module}*`
+  aridir=${HOME}/ss
+  version_in=${srcdir}/gdb/version.in
 fi
-module=`basename ${snapshot}`
-module=`basename ${module} .bz2`
-module=`basename ${module} .tar`
-srcdir=`echo ${tmpdir}/${module}*`
-version_in=${srcdir}/gdb/version.in
+
 if [ ! -r ${version_in} ]
 then
     echo ERROR: missing version file 1>&2
@@ -140,9 +176,9 @@ fi
 if ${check_indent_p} && test -d "${srcdir}"
 then
     printf "Analizing file indentation:" 1>&2
-    ( cd "${srcdir}" && /bin/sh $HOME/ss/gdb_find.sh ${project} | while
read f
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh ${project} | while
read f
     do
-       if /bin/sh $HOME/ss/gdb_indent.sh < ${f} 2>/dev/null | cmp -s - ${f}
+       if /bin/sh ${aridir}/gdb_indent.sh < ${f} 2>/dev/null | cmp -s -
${f}
        then
            :
        else
@@ -173,8 +209,8 @@ then
     node=`uname -n`
     echo "`date`: Using source lines ${srcf}" 1>&2
     echo "`date`: Checking source code" 1>&2
-    ( cd "${srcdir}" && /bin/sh $HOME/ss/gdb_find.sh "${project}" | \
-       xargs /bin/sh $HOME/ss/gdb_ari.sh -Werror -Wall --print-idx
--src=${srcf}
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh "${project}" | \
+       xargs /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx
--src=${srcf}
     ) > ${bugf}
     # Remove things we are not interested in to signal by email
     # gdbarch changes are not important here
@@ -213,24 +249,6 @@ then
        email_suffix="`date`"
     fi

-    if [ "${node}" = "sourceware.org" ]; then
-       warning_email=gdb-patches@sourceware.org
-    else
-       warning_email=muller@sourceware.org
-    fi
-
-    # Check if ${diff} is not empty
-    if [ -s ${diff} ]; then
-       # Send an email to muller@sourceware.org
-       mutt -s "New ARI warning ${email_suffix}" \
-           ${warning_email} < ${wwwdir}/mail-message
-    else
-      if [ -s ${wwwdir}/${mail-message} ]; then
-       # Send an email to muller@sourceware.org
-       mutt -s "ARI warning list change ${email_suffix}" \
-           muller@sourceware.org < ${wwwdir}/mail-message
-      fi
-    fi
 fi


@@ -363,7 +381,7 @@ then
     cd ${wwwdir}
     rm -f ari.doc ari.idx ari.doc.bug
     # Generate an extra file containing all the bugs that the ARI can
detect.
-    /bin/sh $HOME/ss/gdb_ari.sh -Werror -Wall --print-idx --print-doc >>
ari.doc.bug
+    /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --print-doc >>
ari.doc.bug
     cat ari.*.bug | $AWK > ari.idx '
 BEGIN {
     FS=": *"
@@ -701,7 +719,7 @@ END {


 # Make the scripts available
-cp $HOME/ss/gdb_*.sh ${wwwdir}
+cp ${aridir}/gdb_*.sh ${wwwdir}

 # Compute the ARI index - ratio of zero vs non-zero problems.
 indexes=`awk '



[-- Attachment #2: ari.patch --]
[-- Type: application/octet-stream, Size: 70667 bytes --]

? contrib/ari/patch
Index: contrib/ari/create-web-ari-in-src.sh
===================================================================
RCS file: contrib/ari/create-web-ari-in-src.sh
diff -N contrib/ari/create-web-ari-in-src.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/create-web-ari-in-src.sh	26 May 2012 11:44:44 -0000
@@ -0,0 +1,68 @@
+#! /bin/sh
+
+# GDB script to create web ARI page directly from within gdb/ari directory.
+#
+# Copyright (C) 2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+set -x
+
+# Determine directory of current script.
+scriptpath=`dirname $0`
+# If "scriptpath" is a relative path, then convert it to absolute.
+if [ "`echo ${scriptpath} | cut -b1`" != '/' ] ; then
+    scriptpath="`pwd`/${scriptpath}"
+fi
+
+# update-web-ari.sh script wants four parameters
+# 1: directory of checkout src or gdb-RELEASE for release sources.
+# 2: a temp directory.
+# 3: a directory for generated web page.
+# 4: The name of the current package, must be gdb here.
+# Here we provide default values for these 4 parameters
+
+# srcdir parameter
+if [ -z "${srcdir}" ] ; then
+  srcdir=${scriptpath}/../../..
+fi
+
+# Determine location of a temporary directory to be used by
+# update-web-ari.sh script.
+if [ -z "${tempdir}" ] ; then
+  if [ ! -z "$TMP" ] ; then
+    tempdir=$TMP/create-ari
+  elif [ ! -z "$TEMP" ] ; then
+    tempdir=$TEMP/create-ari
+  else
+    tempdir=/tmp/create-ari
+  fi
+fi
+
+# Default location of generate index.hmtl web page.
+if [ -z "${webdir}" ] ; then
+  webdir=~/htdocs/www/local/ari
+fi
+
+# Launch update-web-ari.sh in same directory as current script.
+${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
+
+if [ -f "${webdir}/index.html" ] ; then
+  echo "ARI output can be viewed in file \"${webdir}/index.html\""
+else
+  echo "ARI script failed to generate file \"${webdir}/index.html\""
+fi
+
Index: contrib/ari/gdb_ari.sh
===================================================================
RCS file: contrib/ari/gdb_ari.sh
diff -N contrib/ari/gdb_ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/gdb_ari.sh	26 May 2012 11:44:44 -0000
@@ -0,0 +1,1347 @@
+#!/bin/sh
+
+# GDB script to list of problems using awk.
+#
+# Copyright (C) 2002-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=c ; export LANG
+LC_ALL=c ; export LC_ALL
+
+# Permanent checks take the form:
+
+#     Do not use XXXX, ISO C 90 implies YYYY
+#     Do not use XXXX, instead use YYYY''.
+
+# and should never be removed.
+
+# Temporary checks take the form:
+
+#     Replace XXXX with YYYY
+
+# and once they reach zero, can be eliminated.
+
+# FIXME: It should be able to override this on the command line
+error="regression"
+warning="regression"
+ari="regression eol code comment deprecated legacy obsolete gettext"
+all="regression eol code comment deprecated legacy obsolete gettext deprecate internal gdbarch macro"
+print_doc=0
+print_idx=0
+
+usage ()
+{
+    cat <<EOF 1>&2
+Error: $1
+
+Usage:
+    $0 --print-doc --print-idx -Wall -Werror -W<category> <file> ...
+Options:
+  --print-doc    Print a list of all potential problems, then exit.
+  --print-idx    Include the problems IDX (index or key) in every message.
+  --src=file     Write source lines to file.
+  -Werror        Treat all problems as errors.
+  -Wall          Report all problems.
+  -Wari          Report problems that should be fixed in new code.
+  -W<category>   Report problems in the specifed category.  Vaid categories
+                 are: ${all}
+EOF
+    exit 1
+}
+
+
+# Parse the various options
+Woptions=
+srclines=""
+while test $# -gt 0
+do
+    case "$1" in
+    -Wall ) Woptions="${all}" ;;
+    -Wari ) Woptions="${ari}" ;;
+    -Werror ) Werror=1 ;;
+    -W* ) Woptions="${Woptions} `echo x$1 | sed -e 's/x-W//'`" ;;
+    --print-doc ) print_doc=1 ;;
+    --print-idx ) print_idx=1 ;;
+    --src=* ) srclines="`echo $1 | sed -e 's/--src=/srclines=\"/'`\"" ;;
+    -- ) shift ; break ;;
+    - ) break ;;
+    -* ) usage "$1: unknown option" ;;
+    * ) break ;;
+    esac
+    shift
+done
+if test -n "$Woptions" ; then
+    warning="$Woptions"
+    error=
+fi
+
+
+# -Werror implies treating all warnings as errors.
+if test -n "${Werror}" ; then
+    error="${error} ${warning}"
+fi
+
+
+# Validate all errors and warnings.
+for w in ${warning} ${error}
+do
+    case " ${all} " in
+    *" ${w} "* ) ;;
+    * ) usage "Unknown option -W${w}" ;;
+    esac
+done
+
+
+# make certain that there is at least one file.
+if test $# -eq 0 -a ${print_doc} = 0
+then
+    usage "Missing file."
+fi
+
+
+# Convert the errors/warnings into corresponding array entries.
+for a in ${all}
+do
+    aris="${aris} ari_${a} = \"${a}\";"
+done
+for w in ${warning}
+do
+    warnings="${warnings} warning[ari_${w}] = 1;"
+done
+for e in ${error}
+do
+    errors="${errors} error[ari_${e}]  = 1;"
+done
+
+awk -- '
+BEGIN {
+    # NOTE, for a per-file begin use "FNR == 1".
+    '"${aris}"'
+    '"${errors}"'
+    '"${warnings}"'
+    '"${srclines}"'
+    print_doc =  '$print_doc'
+    print_idx =  '$print_idx'
+    PWD = "'`pwd`'"
+}
+
+# Print the error message for BUG.  Append SUPLEMENT if non-empty.
+function print_bug(file,line,prefix,category,bug,doc,supplement, suffix,idx) {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    if (supplement) {
+	suffix = " (" supplement ")"
+    } else {
+	suffix = ""
+    }
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print file ":" line ": " prefix category ": " idx doc suffix
+    if (srclines != "") {
+	print file ":" line ":" $0 >> srclines
+    }
+}
+
+function fix(bug,file,count) {
+    skip[bug, file] = count
+    skipped[bug, file] = 0
+}
+
+function fail(bug,supplement) {
+    if (doc[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing doc for bug " bug)
+	exit
+    }
+    if (category[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing category for bug " bug)
+	exit
+    }
+
+    if (ARI_OK == bug) {
+	return
+    }
+    # Trim the filename down to just DIRECTORY/FILE so that it can be
+    # robustly used by the FIX code.
+
+    if (FILENAME ~ /^\//) {
+	canonicalname = FILENAME
+    } else {
+        canonicalname = PWD "/" FILENAME
+    }
+    shortname = gensub (/^.*\/([^\\]*\/[^\\]*)$/, "\\1", 1, canonicalname)
+
+    skipped[bug, shortname]++
+    if (skip[bug, shortname] >= skipped[bug, shortname]) {
+	# print FILENAME, FNR, skip[bug, FILENAME], skipped[bug, FILENAME], bug
+	# Do nothing
+    } else if (error[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "", category[bug], bug, doc[bug], supplement)
+    } else if (warning[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "warning: ", category[bug], bug, doc[bug], supplement)
+    }
+}
+
+FNR == 1 {
+    seen[FILENAME] = 1
+    if (match(FILENAME, "\\.[ly]$")) {
+      # FILENAME is a lex or yacc source
+      is_yacc_or_lex = 1
+    }
+    else {
+      is_yacc_or_lex = 0
+    }
+}
+END {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    # Did we do only a partial skip?
+    for (bug_n_file in skip) {
+	split (bug_n_file, a, SUBSEP)
+	bug = a[1]
+	file = a[2]
+	if (seen[file] && (skipped[bug_n_file] < skip[bug_n_file])) {
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    b = file " missing " bug
+	    print_bug(file, 0, "", "internal", file " missing " bug, "Expecting " skip[bug_n_file] " occurances of bug " bug " in file " file ", only found " skipped[bug_n_file])
+	}
+    }
+}
+
+
+# Skip OBSOLETE lines
+/(^|[^_[:alnum:]])OBSOLETE([^_[:alnum:]]|$)/ { next; }
+
+# Skip ARI lines
+
+BEGIN {
+    ARI_OK = ""
+}
+
+/\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = gensub(/^.*\/\* ARI:[[:space:]]*(.*[^[:space:]])[[:space:]]*\*\/.*$/, "\\1", 1, $0)
+    # print "ARI line found \"" $0 "\""
+    # print "ARI_OK \"" ARI_OK "\""
+}
+! /\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = ""
+}
+
+
+# Things in comments
+
+BEGIN { doc["GNU/Linux"] = "\
+Do not use `Linux'\'', instead use `Linux kernel'\'' or `GNU/Linux system'\'';\
+ comments should clearly differentiate between the two (this test assumes that\
+ word `Linux'\'' appears on the same line as the word `GNU'\'' or `kernel'\''\
+ or a kernel version"
+    category["GNU/Linux"] = ari_comment
+}
+/(^|[^_[:alnum:]])Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux [:digit:]\.[:digit:]+)/ {
+    fail("GNU/Linux")
+}
+
+BEGIN { doc["ARGSUSED"] = "\
+Do not use ARGSUSED, unnecessary"
+    category["ARGSUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ARGSUSED([^_[:alnum:]]|$)/ {
+    fail("ARGSUSED")
+}
+
+
+# SNIP - Strip out comments - SNIP
+
+FNR == 1 {
+    comment_p = 0
+}
+comment_p && /\*\// { gsub (/^([^\*]|\*+[^\/\*])*\*+\//, " "); comment_p = 0; }
+comment_p { next; }
+!comment_p { gsub (/\/\*([^\*]|\*+[^\/\*])*\*+\//, " "); }
+!comment_p && /(^|[^"])\/\*/ { gsub (/\/\*.*$/, " "); comment_p = 1; }
+
+
+BEGIN { doc["_ markup"] = "\
+All messages should be marked up with _."
+    category["_ markup"] = ari_gettext
+}
+/^[^"]*[[:space:]](warning|error|error_no_arg|query|perror_with_name)[[:space:]]*\([^_\(a-z]/ {
+    if (! /\("%s"/) {
+	fail("_ markup")
+    }
+}
+
+BEGIN { doc["trailing new line"] = "\
+A message should not have a trailing new line"
+    category["trailing new line"] = ari_gettext
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(_\(".*\\n"\)[\),]/ {
+    fail("trailing new line")
+}
+
+# Include files for which GDB has a custom version.
+
+BEGIN { doc["assert.h"] = "\
+Do not include assert.h, instead include \"gdb_assert.h\"";
+    category["assert.h"] = ari_regression
+    fix("assert.h", "gdb/gdb_assert.h", 0) # it does not use it
+}
+/^#[[:space:]]*include[[:space:]]+.assert\.h./ {
+    fail("assert.h")
+}
+
+BEGIN { doc["dirent.h"] = "\
+Do not include dirent.h, instead include gdb_dirent.h"
+    category["dirent.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.dirent\.h./ {
+    fail("dirent.h")
+}
+
+BEGIN { doc["regex.h"] = "\
+Do not include regex.h, instead include gdb_regex.h"
+    category["regex.h"] = ari_regression
+    fix("regex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.regex\.h./ {
+    fail("regex.h")
+}
+
+BEGIN { doc["xregex.h"] = "\
+Do not include xregex.h, instead include gdb_regex.h"
+    category["xregex.h"] = ari_regression
+    fix("xregex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.xregex\.h./ {
+    fail("xregex.h")
+}
+
+BEGIN { doc["gnu-regex.h"] = "\
+Do not include gnu-regex.h, instead include gdb_regex.h"
+    category["gnu-regex.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.gnu-regex\.h./ {
+    fail("gnu regex.h")
+}
+
+BEGIN { doc["stat.h"] = "\
+Do not include stat.h or sys/stat.h, instead include gdb_stat.h"
+    category["stat.h"] = ari_regression
+    fix("stat.h", "gdb/gdb_stat.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.stat\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/stat\.h./ {
+    fail("stat.h")
+}
+
+BEGIN { doc["wait.h"] = "\
+Do not include wait.h or sys/wait.h, instead include gdb_wait.h"
+    fix("wait.h", "gdb/gdb_wait.h", 2);
+    category["wait.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.wait\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/wait\.h./ {
+    fail("wait.h")
+}
+
+BEGIN { doc["vfork.h"] = "\
+Do not include vfork.h, instead include gdb_vfork.h"
+    fix("vfork.h", "gdb/gdb_vfork.h", 1);
+    category["vfork.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.vfork\.h./ {
+    fail("vfork.h")
+}
+
+BEGIN { doc["error not internal-warning"] = "\
+Do not use error(\"internal-warning\"), instead use internal_warning"
+    category["error not internal-warning"] = ari_regression
+}
+/error.*\"[Ii]nternal.warning/ {
+    fail("error not internal-warning")
+}
+
+BEGIN { doc["%p"] = "\
+Do not use printf(\"%p\"), instead use printf(\"%s\",paddr()) to dump a \
+target address, or host_address_to_string() for a host address"
+    category["%p"] = ari_code
+}
+/%p/ && !/%prec/ {
+    fail("%p")
+}
+
+BEGIN { doc["%ll"] = "\
+Do not use printf(\"%ll\"), instead use printf(\"%s\",phex()) to dump a \
+`long long'\'' value"
+    category["%ll"] = ari_code
+}
+# Allow %ll in scanf
+/%[0-9]*ll/ && !/scanf \(.*%[0-9]*ll/ {
+    fail("%ll")
+}
+
+
+# SNIP - Strip out strings - SNIP
+
+# Test on top.c, scm-valprint.c, remote-rdi.c, ada-lang.c
+FNR == 1 {
+    string_p = 0
+    trace_string = 0
+}
+# Strip escaped characters.
+{ gsub(/\\./, "."); }
+# Strip quoted quotes.
+{ gsub(/'\''.'\''/, "'\''.'\''"); }
+# End of multi-line string
+string_p && /\"/ {
+    if (trace_string) print "EOS:" FNR, $0;
+    gsub (/^[^\"]*\"/, "'\''");
+    string_p = 0;
+}
+# Middle of multi-line string, discard line.
+string_p {
+    if (trace_string) print "MOS:" FNR, $0;
+    $0 = ""
+}
+# Strip complete strings from the middle of the line
+!string_p && /\"[^\"]*\"/ {
+    if (trace_string) print "COS:" FNR, $0;
+    gsub (/\"[^\"]*\"/, "'\''");
+}
+# Start of multi-line string
+BEGIN { doc["multi-line string"] = "\
+Multi-line string must have the newline escaped"
+    category["multi-line string"] = ari_regression
+}
+!string_p && /\"/ {
+    if (trace_string) print "SOS:" FNR, $0;
+    if (/[^\\]$/) {
+	fail("multi-line string")
+    }
+    gsub (/\"[^\"]*$/, "'\''");
+    string_p = 1;
+}
+# { print }
+
+# Multi-line string
+string_p &&
+
+# Accumulate continuation lines
+FNR == 1 {
+    cont_p = 0
+}
+!cont_p { full_line = ""; }
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+
+# GDB uses ISO C 90.  Check for any non pure ISO C 90 code
+
+BEGIN { doc["PARAMS"] = "\
+Do not use PARAMS(), ISO C 90 implies prototypes"
+    category["PARAMS"] = ari_regression
+}
+/(^|[^_[:alnum:]])PARAMS([^_[:alnum:]]|$)/ {
+    fail("PARAMS")
+}
+
+BEGIN { doc["__func__"] = "\
+Do not use __func__, ISO C 90 does not support this macro"
+    category["__func__"] = ari_regression
+    fix("__func__", "gdb/gdb_assert.h", 1)
+}
+/(^|[^_[:alnum:]])__func__([^_[:alnum:]]|$)/ {
+    fail("__func__")
+}
+
+BEGIN { doc["__FUNCTION__"] = "\
+Do not use __FUNCTION__, ISO C 90 does not support this macro"
+    category["__FUNCTION__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__FUNCTION__([^_[:alnum:]]|$)/ {
+    fail("__FUNCTION__")
+}
+
+BEGIN { doc["__CYGWIN32__"] = "\
+Do not use __CYGWIN32__, instead use __CYGWIN__ or, better, an explicit \
+autoconf tests"
+    category["__CYGWIN32__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__CYGWIN32__([^_[:alnum:]]|$)/ {
+    fail("__CYGWIN32__")
+}
+
+BEGIN { doc["PTR"] = "\
+Do not use PTR, ISO C 90 implies `void *'\''"
+    category["PTR"] = ari_regression
+    #fix("PTR", "gdb/utils.c", 6)
+}
+/(^|[^_[:alnum:]])PTR([^_[:alnum:]]|$)/ {
+    fail("PTR")
+}
+
+BEGIN { doc["UCASE function"] = "\
+Function name is uppercase."
+    category["UCASE function"] = ari_code
+    possible_UCASE = 0
+    UCASE_full_line = ""
+}
+(possible_UCASE) {
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    # Closing brace found?
+    else if (UCASE_full_line ~ \
+	/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((UCASE_full_line ~ \
+	    /^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = UCASE_full_line;
+	    fail("UCASE function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_UCASE = 0
+	UCASE_full_line = ""
+    } else {
+	UCASE_full_line = UCASE_full_line $0;
+    }
+}
+/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_UCASE = 1
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    possible_FNR = FNR
+    UCASE_full_line = $0
+}
+
+
+BEGIN { doc["editCase function"] = "\
+Function name starts lower case but has uppercased letters."
+    category["editCase function"] = ari_code
+    possible_editCase = 0
+    editCase_full_line = ""
+}
+(possible_editCase) {
+    if (ARI_OK == "ediCase function") {
+	possible_editCase = 0
+    }
+    # Closing brace found?
+    else if (editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = editCase_full_line;
+	    fail("editCase function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_editCase = 0
+	editCase_full_line = ""
+    } else {
+	editCase_full_line = editCase_full_line $0;
+    }
+}
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_editCase = 1
+    if (ARI_OK == "editCase function") {
+        possible_editCase = 0
+    }
+    possible_FNR = FNR
+    editCase_full_line = $0
+}
+
+# Only function implementation should be on first column
+BEGIN { doc["function call in first column"] = "\
+Function name in first column should be restricted to function implementation"
+    category["function call in first column"] = ari_code
+}
+/^[a-z][a-z0-9_]*[[:space:]]*\((|[^*][^()]*)\)[[:space:]]*[^ \t]+/ {
+    fail("function call in first column")
+}
+
+
+# Functions without any parameter should have (void)
+# after their name not simply ().
+BEGIN { doc["no parameter function"] = "\
+Function having no parameter should be declared with funcname (void)."
+    category["no parameter function"] = ari_code
+}
+/^[a-zA-Z][a-z0-9A-Z_]*[[:space:]]*\(\)/ {
+    fail("no parameter function")
+}
+
+BEGIN { doc["hash"] = "\
+Do not use ` #...'\'', instead use `#...'\''(some compilers only correctly \
+parse a C preprocessor directive when `#'\'' is the first character on \
+the line)"
+    category["hash"] = ari_regression
+}
+/^[[:space:]]+#/ {
+    fail("hash")
+}
+
+BEGIN { doc["OP eol"] = "\
+Do not use &&, or || at the end of a line"
+    category["OP eol"] = ari_code
+}
+/(\|\||\&\&|==|!=)[[:space:]]*$/ {
+    fail("OP eol")
+}
+
+BEGIN { doc["strerror"] = "\
+Do not use strerror(), instead use safe_strerror()"
+    category["strerror"] = ari_regression
+    fix("strerror", "gdb/gdb_string.h", 1)
+    fix("strerror", "gdb/mingw-hdep.c", 1)
+    fix("strerror", "gdb/posix-hdep.c", 1)
+}
+/(^|[^_[:alnum:]])strerror[[:space:]]*\(/ {
+    fail("strerror")
+}
+
+BEGIN { doc["long long"] = "\
+Do not use `long long'\'', instead use LONGEST"
+    category["long long"] = ari_code
+    # defs.h needs two such patterns for LONGEST and ULONGEST definitions
+    fix("long long", "gdb/defs.h", 2)
+}
+/(^|[^_[:alnum:]])long[[:space:]]+long([^_[:alnum:]]|$)/ {
+    fail("long long")
+}
+
+BEGIN { doc["ATTRIBUTE_UNUSED"] = "\
+Do not use ATTRIBUTE_UNUSED, do not bother (GDB is compiled with -Werror and, \
+consequently, is not able to tolerate false warnings.  Since -Wunused-param \
+produces such warnings, neither that warning flag nor ATTRIBUTE_UNUSED \
+are used by GDB"
+    category["ATTRIBUTE_UNUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTRIBUTE_UNUSED([^_[:alnum:]]|$)/ {
+    fail("ATTRIBUTE_UNUSED")
+}
+
+BEGIN { doc["ATTR_FORMAT"] = "\
+Do not use ATTR_FORMAT, use ATTRIBUTE_PRINTF instead"
+    category["ATTR_FORMAT"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_FORMAT([^_[:alnum:]]|$)/ {
+    fail("ATTR_FORMAT")
+}
+
+BEGIN { doc["ATTR_NORETURN"] = "\
+Do not use ATTR_NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["ATTR_NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_NORETURN([^_[:alnum:]]|$)/ {
+    fail("ATTR_NORETURN")
+}
+
+BEGIN { doc["NORETURN"] = "\
+Do not use NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])NORETURN([^_[:alnum:]]|$)/ {
+    fail("NORETURN")
+}
+
+
+# General problems
+
+BEGIN { doc["multiple messages"] = "\
+Do not use multiple calls to warning or error, instead use a single call"
+    category["multiple messages"] = ari_gettext
+}
+FNR == 1 {
+    warning_fnr = -1
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(/ {
+    if (FNR == warning_fnr + 1) {
+	fail("multiple messages")
+    } else {
+	warning_fnr = FNR
+    }
+}
+
+# Commented out, but left inside sources, just in case.
+# BEGIN { doc["inline"] = "\
+# Do not use the inline attribute; \
+# since the compiler generally ignores this, better algorithm selection \
+# is needed to improved performance"
+#    category["inline"] = ari_code
+# }
+# /(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
+#     fail("inline")
+# }
+
+# This test is obsolete as this type
+# has been deprecated and finally suppressed from GDB sources
+#BEGIN { doc["obj_private"] = "\
+#Replace obj_private with objfile_data"
+#    category["obj_private"] = ari_obsolete
+#}
+#/(^|[^_[:alnum:]])obj_private([^_[:alnum:]]|$)/ {
+#    fail("obj_private")
+#}
+
+BEGIN { doc["abort"] = "\
+Do not use abort, instead use internal_error; GDB should never abort"
+    category["abort"] = ari_regression
+    fix("abort", "gdb/utils.c", 3)
+}
+/(^|[^_[:alnum:]])abort[[:space:]]*\(/ {
+    fail("abort")
+}
+
+BEGIN { doc["basename"] = "\
+Do not use basename, instead use lbasename"
+    category["basename"] = ari_regression
+}
+/(^|[^_[:alnum:]])basename[[:space:]]*\(/ {
+    fail("basename")
+}
+
+BEGIN { doc["assert"] = "\
+Do not use assert, instead use gdb_assert or internal_error; assert \
+calls abort and GDB should never call abort"
+    category["assert"] = ari_regression
+}
+/(^|[^_[:alnum:]])assert[[:space:]]*\(/ {
+    fail("assert")
+}
+
+BEGIN { doc["TARGET_HAS_HARDWARE_WATCHPOINTS"] = "\
+Replace TARGET_HAS_HARDWARE_WATCHPOINTS with nothing, not needed"
+    category["TARGET_HAS_HARDWARE_WATCHPOINTS"] = ari_regression
+}
+/(^|[^_[:alnum:]])TARGET_HAS_HARDWARE_WATCHPOINTS([^_[:alnum:]]|$)/ {
+    fail("TARGET_HAS_HARDWARE_WATCHPOINTS")
+}
+
+BEGIN { doc["ADD_SHARED_SYMBOL_FILES"] = "\
+Replace ADD_SHARED_SYMBOL_FILES with nothing, not needed?"
+    category["ADD_SHARED_SYMBOL_FILES"] = ari_regression
+}
+/(^|[^_[:alnum:]])ADD_SHARED_SYMBOL_FILES([^_[:alnum:]]|$)/ {
+    fail("ADD_SHARED_SYMBOL_FILES")
+}
+
+BEGIN { doc["SOLIB_ADD"] = "\
+Replace SOLIB_ADD with nothing, not needed?"
+    category["SOLIB_ADD"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_ADD([^_[:alnum:]]|$)/ {
+    fail("SOLIB_ADD")
+}
+
+BEGIN { doc["SOLIB_CREATE_INFERIOR_HOOK"] = "\
+Replace SOLIB_CREATE_INFERIOR_HOOK with nothing, not needed?"
+    category["SOLIB_CREATE_INFERIOR_HOOK"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_CREATE_INFERIOR_HOOK([^_[:alnum:]]|$)/ {
+    fail("SOLIB_CREATE_INFERIOR_HOOK")
+}
+
+BEGIN { doc["SOLIB_LOADED_LIBRARY_PATHNAME"] = "\
+Replace SOLIB_LOADED_LIBRARY_PATHNAME with nothing, not needed?"
+    category["SOLIB_LOADED_LIBRARY_PATHNAME"] = ari_regression
+}
+/(^|[^_[:alnum:]])SOLIB_LOADED_LIBRARY_PATHNAME([^_[:alnum:]]|$)/ {
+    fail("SOLIB_LOADED_LIBRARY_PATHNAME")
+}
+
+BEGIN { doc["REGISTER_U_ADDR"] = "\
+Replace REGISTER_U_ADDR with nothing, not needed?"
+    category["REGISTER_U_ADDR"] = ari_regression
+}
+/(^|[^_[:alnum:]])REGISTER_U_ADDR([^_[:alnum:]]|$)/ {
+    fail("REGISTER_U_ADDR")
+}
+
+BEGIN { doc["PROCESS_LINENUMBER_HOOK"] = "\
+Replace PROCESS_LINENUMBER_HOOK with nothing, not needed?"
+    category["PROCESS_LINENUMBER_HOOK"] = ari_regression
+}
+/(^|[^_[:alnum:]])PROCESS_LINENUMBER_HOOK([^_[:alnum:]]|$)/ {
+    fail("PROCESS_LINENUMBER_HOOK")
+}
+
+BEGIN { doc["PC_SOLIB"] = "\
+Replace PC_SOLIB with nothing, not needed?"
+    category["PC_SOLIB"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])PC_SOLIB([^_[:alnum:]]|$)/ {
+    fail("PC_SOLIB")
+}
+
+BEGIN { doc["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = "\
+Replace IN_SOLIB_DYNSYM_RESOLVE_CODE with nothing, not needed?"
+    category["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = ari_regression
+}
+/(^|[^_[:alnum:]])IN_SOLIB_DYNSYM_RESOLVE_CODE([^_[:alnum:]]|$)/ {
+    fail("IN_SOLIB_DYNSYM_RESOLVE_CODE")
+}
+
+BEGIN { doc["GCC_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["GCC2_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC2_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC2_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC2_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC2_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["FUNCTION_EPILOGUE_SIZE"] = "\
+Replace FUNCTION_EPILOGUE_SIZE with nothing, not needed?"
+    category["FUNCTION_EPILOGUE_SIZE"] = ari_regression
+}
+/(^|[^_[:alnum:]])FUNCTION_EPILOGUE_SIZE([^_[:alnum:]]|$)/ {
+    fail("FUNCTION_EPILOGUE_SIZE")
+}
+
+BEGIN { doc["HAVE_VFORK"] = "\
+Do not use HAVE_VFORK, instead include \"gdb_vfork.h\" and call vfork() \
+unconditionally"
+    category["HAVE_VFORK"] = ari_regression
+}
+/(^|[^_[:alnum:]])HAVE_VFORK([^_[:alnum:]]|$)/ {
+    fail("HAVE_VFORK")
+}
+
+BEGIN { doc["bcmp"] = "\
+Do not use bcmp(), ISO C 90 implies memcmp()"
+    category["bcmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcmp[[:space:]]*\(/ {
+    fail("bcmp")
+}
+
+BEGIN { doc["setlinebuf"] = "\
+Do not use setlinebuf(), ISO C 90 implies setvbuf()"
+    category["setlinebuf"] = ari_regression
+}
+/(^|[^_[:alnum:]])setlinebuf[[:space:]]*\(/ {
+    fail("setlinebuf")
+}
+
+BEGIN { doc["bcopy"] = "\
+Do not use bcopy(), ISO C 90 implies memcpy() and memmove()"
+    category["bcopy"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcopy[[:space:]]*\(/ {
+    fail("bcopy")
+}
+
+BEGIN { doc["get_frame_base"] = "\
+Replace get_frame_base with get_frame_id, get_frame_base_address, \
+get_frame_locals_address, or get_frame_args_address."
+    category["get_frame_base"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])get_frame_base([^_[:alnum:]]|$)/ {
+    fail("get_frame_base")
+}
+
+BEGIN { doc["floatformat_to_double"] = "\
+Do not use floatformat_to_double() from libierty, \
+instead use floatformat_to_doublest()"
+    fix("floatformat_to_double", "gdb/doublest.c", 1)
+    category["floatformat_to_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_to_double[[:space:]]*\(/ {
+    fail("floatformat_to_double")
+}
+
+BEGIN { doc["floatformat_from_double"] = "\
+Do not use floatformat_from_double() from libierty, \
+instead use floatformat_from_doublest()"
+    category["floatformat_from_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_from_double[[:space:]]*\(/ {
+    fail("floatformat_from_double")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["LITTLE_ENDIAN"] = "\
+Do not use LITTLE_ENDIAN, instead use BFD_ENDIAN_LITTLE";
+    category["LITTLE_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])LITTLE_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("LITTLE_ENDIAN")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["sec_ptr"] = "\
+Instead of sec_ptr, use struct bfd_section";
+    category["sec_ptr"] = ari_regression
+}
+/(^|[^_[:alnum:]])sec_ptr([^_[:alnum:]]|$)/ {
+    fail("sec_ptr")
+}
+
+BEGIN { doc["frame_unwind_unsigned_register"] = "\
+Replace frame_unwind_unsigned_register with frame_unwind_register_unsigned"
+    category["frame_unwind_unsigned_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])frame_unwind_unsigned_register([^_[:alnum:]]|$)/ {
+    fail("frame_unwind_unsigned_register")
+}
+
+BEGIN { doc["frame_register_read"] = "\
+Replace frame_register_read() with get_frame_register(), or \
+possibly introduce a new method safe_get_frame_register()"
+    category["frame_register_read"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])frame_register_read([^_[:alnum:]]|$)/ {
+    fail("frame_register_read")
+}
+
+BEGIN { doc["read_register"] = "\
+Replace read_register() with regcache_read() et.al."
+    category["read_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_register([^_[:alnum:]]|$)/ {
+    fail("read_register")
+}
+
+BEGIN { doc["write_register"] = "\
+Replace write_register() with regcache_read() et.al."
+    category["write_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])write_register([^_[:alnum:]]|$)/ {
+    fail("write_register")
+}
+
+function report(name) {
+    # Drop any trailing _P.
+    name = gensub(/(_P|_p)$/, "", 1, name)
+    # Convert to lower case
+    name = tolower(name)
+    # Split into category and bug
+    cat = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\1", 1, name)
+    bug = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\2", 1, name)
+    # Report it
+    name = cat " " bug
+    doc[name] = "Do not use " cat " " bug ", see declaration for details"
+    category[name] = cat
+    fail(name)
+}
+
+/(^|[^_[:alnum:]])(DEPRECATED|deprecated|set_gdbarch_deprecated|LEGACY|legacy|set_gdbarch_legacy)_/ {
+    line = $0
+    # print "0 =", $0
+    while (1) {
+	name = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\2", 1, line)
+	line = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\1 \\4", 1, line)
+	# print "name =", name, "line =", line
+	if (name == line) break;
+	report(name)
+    }
+}
+
+# Count the number of times each architecture method is set
+/(^|[^_[:alnum:]])set_gdbarch_[_[:alnum:]]*([^_[:alnum:]]|$)/ {
+    name = gensub(/^.*set_gdbarch_([_[:alnum:]]*).*$/, "\\1", 1, $0)
+    doc["set " name] = "\
+Call to set_gdbarch_" name
+    category["set " name] = ari_gdbarch
+    fail("set " name)
+}
+
+# Count the number of times each tm/xm/nm macro is defined or undefined
+/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+.*$/ \
+&& !/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+_H($|[[:space:]])/ \
+&& FILENAME ~ /(^|\/)config\/(|[^\/]*\/)(tm-|xm-|nm-).*\.h$/ {
+    basename = gensub(/(^|.*\/)([^\/]*)$/, "\\2", 1, FILENAME)
+    type = gensub(/^(tm|xm|nm)-.*\.h$/, "\\1", 1, basename)
+    name = gensub(/^#[[:space:]]*(undef|define)[[:space:]]+([[:alnum:]_]+).*$/, "\\2", 1, $0)
+    if (type == basename) {
+        type = "macro"
+    }
+    doc[type " " name] = "\
+Do not define macros such as " name " in a tm, nm or xm file, \
+in fact do not provide a tm, nm or xm file"
+    category[type " " name] = ari_macro
+    fail(type " " name)
+}
+
+BEGIN { doc["deprecated_registers"] = "\
+Replace deprecated_registers with nothing, they have reached \
+end-of-life"
+    category["deprecated_registers"] = ari_eol
+}
+/(^|[^_[:alnum:]])deprecated_registers([^_[:alnum:]]|$)/ {
+    fail("deprecated_registers")
+}
+
+BEGIN { doc["read_pc"] = "\
+Replace READ_PC() with frame_pc_unwind; \
+at present the inferior function call code still uses this"
+    category["read_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_PC[[:space:]]*\(/ {
+    fail("read_pc")
+}
+
+BEGIN { doc["write_pc"] = "\
+Replace write_pc() with get_frame_base_address or get_frame_id; \
+at present the inferior function call code still uses this when doing \
+a DECR_PC_AFTER_BREAK"
+    category["write_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_WRITE_PC[[:space:]]*\(/ {
+    fail("write_pc")
+}
+
+BEGIN { doc["generic_target_write_pc"] = "\
+Replace generic_target_write_pc with a per-architecture implementation, \
+this relies on PC_REGNUM which is being eliminated"
+    category["generic_target_write_pc"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_target_write_pc([^_[:alnum:]]|$)/ {
+    fail("generic_target_write_pc")
+}
+
+BEGIN { doc["read_sp"] = "\
+Replace read_sp() with frame_sp_unwind"
+    category["read_sp"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_SP[[:space:]]*\(/ {
+    fail("read_sp")
+}
+
+BEGIN { doc["register_cached"] = "\
+Replace register_cached() with nothing, does not have a regcache parameter"
+    category["register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])register_cached[[:space:]]*\(/ {
+    fail("register_cached")
+}
+
+BEGIN { doc["set_register_cached"] = "\
+Replace set_register_cached() with nothing, does not have a regcache parameter"
+    category["set_register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])set_register_cached[[:space:]]*\(/ {
+    fail("set_register_cached")
+}
+
+# Print functions: Use versions that either check for buffer overflow
+# or safely allocate a fresh buffer.
+
+BEGIN { doc["sprintf"] = "\
+Do not use sprintf, instead use xsnprintf or xstrprintf"
+    category["sprintf"] = ari_code
+}
+/(^|[^_[:alnum:]])sprintf[[:space:]]*\(/ {
+    fail("sprintf")
+}
+
+BEGIN { doc["vsprintf"] = "\
+Do not use vsprintf(), instead use xstrvprintf"
+    category["vsprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vsprintf[[:space:]]*\(/ {
+    fail("vsprintf")
+}
+
+BEGIN { doc["asprintf"] = "\
+Do not use asprintf(), instead use xstrprintf()"
+    category["asprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])asprintf[[:space:]]*\(/ {
+    fail("asprintf")
+}
+
+BEGIN { doc["vasprintf"] = "\
+Do not use vasprintf(), instead use xstrvprintf"
+    fix("vasprintf", "gdb/utils.c", 1)
+    category["vasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vasprintf[[:space:]]*\(/ {
+    fail("vasprintf")
+}
+
+BEGIN { doc["xasprintf"] = "\
+Do not use xasprintf(), instead use xstrprintf"
+    fix("xasprintf", "gdb/defs.h", 1)
+    fix("xasprintf", "gdb/utils.c", 1)
+    category["xasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xasprintf[[:space:]]*\(/ {
+    fail("xasprintf")
+}
+
+BEGIN { doc["xvasprintf"] = "\
+Do not use xvasprintf(), instead use xstrvprintf"
+    fix("xvasprintf", "gdb/defs.h", 1)
+    fix("xvasprintf", "gdb/utils.c", 1)
+    category["xvasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xvasprintf[[:space:]]*\(/ {
+    fail("xvasprintf")
+}
+
+# More generic memory operations
+
+BEGIN { doc["bzero"] = "\
+Do not use bzero(), instead use memset()"
+    category["bzero"] = ari_regression
+}
+/(^|[^_[:alnum:]])bzero[[:space:]]*\(/ {
+    fail("bzero")
+}
+
+BEGIN { doc["strdup"] = "\
+Do not use strdup(), instead use xstrdup()";
+    category["strdup"] = ari_regression
+}
+/(^|[^_[:alnum:]])strdup[[:space:]]*\(/ {
+    fail("strdup")
+}
+
+BEGIN { doc["strsave"] = "\
+Do not use strsave(), instead use xstrdup() et.al."
+    category["strsave"] = ari_regression
+}
+/(^|[^_[:alnum:]])strsave[[:space:]]*\(/ {
+    fail("strsave")
+}
+
+# String compare functions
+
+BEGIN { doc["strnicmp"] = "\
+Do not use strnicmp(), instead use strncasecmp()"
+    category["strnicmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])strnicmp[[:space:]]*\(/ {
+    fail("strnicmp")
+}
+
+# Boolean expressions and conditionals
+
+BEGIN { doc["boolean"] = "\
+Do not use `boolean'\'',  use `int'\'' instead"
+    category["boolean"] = ari_regression
+}
+/(^|[^_[:alnum:]])boolean([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("boolean")
+    }
+}
+
+BEGIN { doc["false"] = "\
+Definitely do not use `false'\'' in boolean expressions"
+    category["false"] = ari_regression
+}
+/(^|[^_[:alnum:]])false([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("false")
+    }
+}
+
+BEGIN { doc["true"] = "\
+Do not try to use `true'\'' in boolean expressions"
+    category["true"] = ari_regression
+}
+/(^|[^_[:alnum:]])true([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("true")
+    }
+}
+
+# Typedefs that are either redundant or can be reduced to `struct
+# type *''.
+# Must be placed before if assignment otherwise ARI exceptions
+# are not handled correctly.
+
+BEGIN { doc["d_namelen"] = "\
+Do not use dirent.d_namelen, instead use NAMELEN"
+    category["d_namelen"] = ari_regression
+}
+/(^|[^_[:alnum:]])d_namelen([^_[:alnum:]]|$)/ {
+    fail("d_namelen")
+}
+
+BEGIN { doc["strlen d_name"] = "\
+Do not use strlen dirent.d_name, instead use NAMELEN"
+    category["strlen d_name"] = ari_regression
+}
+/(^|[^_[:alnum:]])strlen[[:space:]]*\(.*[^_[:alnum:]]d_name([^_[:alnum:]]|$)/ {
+    fail("strlen d_name")
+}
+
+BEGIN { doc["var_boolean"] = "\
+Replace var_boolean with add_setshow_boolean_cmd"
+    category["var_boolean"] = ari_regression
+    fix("var_boolean", "gdb/command.h", 1)
+    # fix only uses the last directory level
+    fix("var_boolean", "cli/cli-decode.c", 2)
+}
+/(^|[^_[:alnum:]])var_boolean([^_[:alnum:]]|$)/ {
+    if ($0 !~ /(^|[^_[:alnum:]])case *var_boolean:/) {
+	fail("var_boolean")
+    }
+}
+
+BEGIN { doc["generic_use_struct_convention"] = "\
+Replace generic_use_struct_convention with nothing, \
+EXTRACT_STRUCT_VALUE_ADDRESS is a predicate"
+    category["generic_use_struct_convention"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_use_struct_convention([^_[:alnum:]]|$)/ {
+    fail("generic_use_struct_convention")
+}
+
+BEGIN { doc["if assignment"] = "\
+An IF statement'\''s expression contains an assignment (the GNU coding \
+standard discourages this)"
+    category["if assignment"] = ari_code
+}
+BEGIN { doc["if clause more than 50 lines"] = "\
+An IF statement'\''s expression expands over 50 lines"
+    category["if clause more than 50 lines"] = ari_code
+}
+#
+# Accumulate continuation lines
+FNR == 1 {
+    in_if = 0
+}
+
+/(^|[^_[:alnum:]])if / {
+    in_if = 1;
+    if_brace_level = 0;
+    if_cont_p = 0;
+    if_count = 0;
+    if_brace_end_pos = 0;
+    if_full_line = "";
+}
+(in_if)  {
+    # We want everything up to closing brace of same level
+    if_count++;
+    if (if_count > 50) {
+	print "multiline if: " if_full_line $0
+	fail("if clause more than 50 lines")
+	if_brace_level = 0;
+	if_full_line = "";
+    } else {
+	if (if_count == 1) {
+	    i = index($0,"if ");
+	} else {
+	    i = 1;
+	}
+	for (i=i; i <= length($0); i++) {
+	    char = substr($0,i,1);
+	    if (char == "(") { if_brace_level++; }
+	    if (char == ")") {
+		if_brace_level--;
+		if (!if_brace_level) {
+		    if_brace_end_pos = i;
+		    after_if = substr($0,i+1,length($0));
+		    # Do not parse what is following
+		    break;
+		}
+	    }
+	}
+	if (if_brace_level == 0) {
+	    $0 = substr($0,1,i);
+	    in_if = 0;
+	} else {
+	    if_full_line = if_full_line $0;
+	    if_cont_p = 1;
+	    next;
+	}
+    }
+}
+# if we arrive here, we need to concatenate, but we are at brace level 0
+
+(if_brace_end_pos) {
+    $0 = if_full_line substr($0,1,if_brace_end_pos);
+    if (if_count > 1) {
+	# print "IF: multi line " if_count " found at " FILENAME ":" FNR " \"" $0 "\""
+    }
+    if_cont_p = 0;
+    if_full_line = "";
+}
+/(^|[^_[:alnum:]])if .* = / {
+    # print "fail in if " $0
+    fail("if assignment")
+}
+(if_brace_end_pos) {
+    $0 = $0 after_if;
+    if_brace_end_pos = 0;
+    in_if = 0;
+}
+
+# Printout of all found bug
+
+BEGIN {
+    if (print_doc) {
+	for (bug in doc) {
+	    fail(bug)
+	}
+	exit
+    }
+}' "$@"
+
Index: contrib/ari/gdb_find.sh
===================================================================
RCS file: contrib/ari/gdb_find.sh
diff -N contrib/ari/gdb_find.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/gdb_find.sh	26 May 2012 11:44:44 -0000
@@ -0,0 +1,41 @@
+#!/bin/sh
+
+# GDB script to create list of files to check using gdb_ari.sh.
+#
+# Copyright (C) 2003-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=C ; export LANG
+LC_ALL=C ; export LC_ALL
+
+
+# A find that prunes files that GDB users shouldn't be interested in.
+# Use sort to order files alphabetically.
+
+find "$@" \
+    -name testsuite -prune -o \
+    -name gdbserver -prune -o \
+    -name gnulib -prune -o \
+    -name osf-share -prune -o \
+    -name '*-stub.c' -prune -o \
+    -name '*-exp.c' -prune -o \
+    -name ada-lex.c -prune -o \
+    -name cp-name-parser.c -prune -o \
+    -type f -name '*.[lyhc]' -print | sort
Index: contrib/ari/update-web-ari.sh
===================================================================
RCS file: contrib/ari/update-web-ari.sh
diff -N contrib/ari/update-web-ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/update-web-ari.sh	26 May 2012 11:44:44 -0000
@@ -0,0 +1,921 @@
+#!/bin/sh -x
+
+# GDB script to create GDB ARI web page.
+#
+# Copyright (C) 2001-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# TODO: setjmp.h, setjmp and longjmp.
+
+# Direct stderr into stdout but still hang onto stderr (/dev/fd/3)
+exec 3>&2 2>&1
+ECHO ()
+{
+#   echo "$@" | tee /dev/fd/3 1>&2
+    echo "$@" 1>&2
+    echo "$@" 1>&3
+}
+
+# Really mindless usage
+if test $# -ne 4
+then
+    echo "Usage: $0 <snapshot/sourcedir> <tmpdir> <destdir> <project>" 1>&2
+    exit 1
+fi
+snapshot=$1 ; shift
+tmpdir=$1 ; shift
+wwwdir=$1 ; shift
+project=$1 ; shift
+
+# Try to create destination directory if it doesn't exist yet
+if [ ! -d ${wwwdir} ]
+then
+  mkdir -p ${wwwdir}
+fi
+
+# Fail if destination directory doesn't exist or is not writable
+if [ ! -w ${wwwdir} -o ! -d ${wwwdir} ]
+then
+  echo ERROR: Can not write to directory ${wwwdir} >&2
+  exit 2
+fi
+
+if [ ! -r ${snapshot} ]
+then
+    echo ERROR: Can not read snapshot file 1>&2
+    exit 1
+fi
+
+# FILE formats
+# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+# Where ``*'' is {source,warning,indent,doschk}
+
+unpack_source_p=true
+delete_source_p=true
+
+check_warning_p=false # broken
+check_indent_p=false # too slow, too many fail
+check_source_p=true
+check_doschk_p=true
+check_werror_p=true
+
+update_doc_p=true
+update_web_p=true
+
+if awk --version 2>&1 </dev/null | grep -i gnu > /dev/null
+then
+  AWK=awk
+else
+  AWK=gawk
+fi
+
+
+# Set up a few cleanups
+if ${delete_source_p}
+then
+    trap "cd /tmp; rm -rf ${tmpdir}; exit" 0 1 2 15
+fi
+
+
+# If the first parameter is a directory,
+#we just use it as the extracted source
+if [ -d ${snapshot} ]
+then
+  module=${project}
+  srcdir=${snapshot}
+  aridir=${srcdir}/${module}/ari
+  unpack_source_p=false
+  delete_source_p=false
+  version_in=${srcdir}/${module}/version.in
+else
+  # unpack the tar-ball
+  if ${unpack_source_p}
+  then
+    # Was it previously unpacked?
+    if ${delete_source_p} || test ! -d ${tmpdir}/${module}*
+    then
+	/bin/rm -rf "${tmpdir}"
+	/bin/mkdir -p ${tmpdir}
+	if [ ! -d ${tmpdir} ]
+	then
+	    echo "Problem creating work directory"
+	    exit 1
+	fi
+	cd ${tmpdir} || exit 1
+	echo `date`: Unpacking tar-ball ...
+	case ${snapshot} in
+	    *.tar.bz2 ) bzcat ${snapshot} ;;
+	    *.tar ) cat ${snapshot} ;;
+	    * ) ECHO Bad file ${snapshot} ; exit 1 ;;
+	esac | tar xf -
+    fi
+  fi
+
+  module=`basename ${snapshot}`
+  module=`basename ${module} .bz2`
+  module=`basename ${module} .tar`
+  srcdir=`echo ${tmpdir}/${module}*`
+  aridir=${HOME}/ss
+  version_in=${srcdir}/gdb/version.in
+fi
+
+if [ ! -r ${version_in} ]
+then
+    echo ERROR: missing version file 1>&2
+    exit 1
+fi
+version=`cat ${version_in}`
+
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_warning_p} && test -d "${srcdir}"
+then
+    echo `date`: Parsing compiler warnings 1>&2
+    cat ${root}/ari.compile | $AWK '
+BEGIN {
+    FS=":";
+}
+/^[^:]*:[0-9]*: warning:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  warning[file] += 1;
+}
+/^[^:]*:[0-9]*: error:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  error[file] += 1;
+}
+END {
+  for (file in warning) {
+    print file ":warning:" level[file]
+  }
+  for (file in error) {
+    print file ":error:" level[file]
+  }
+}
+' > ${root}/ari.warning.bug
+fi
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_indent_p} && test -d "${srcdir}"
+then
+    printf "Analizing file indentation:" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh ${project} | while read f
+    do
+	if /bin/sh ${aridir}/gdb_indent.sh < ${f} 2>/dev/null | cmp -s - ${f}
+	then
+	    :
+	else
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    echo "${f}:0: info: indent: Indentation does not match GNU indent output"
+	fi
+    done ) > ${wwwdir}/ari.indent.bug
+    echo ""
+fi
+
+if ${check_source_p} && test -d "${srcdir}"
+then
+    bugf=${wwwdir}/ari.source.bug
+    oldf=${wwwdir}/ari.source.old
+    srcf=${wwwdir}/ari.source.lines
+    oldsrcf=${wwwdir}/ari.source.lines-old
+
+    diff=${wwwdir}/ari.source.diff
+    diffin=${diff}-in
+    newf1=${bugf}1
+    oldf1=${oldf}1
+    oldpruned=${oldf1}-pruned
+    newpruned=${newf1}-pruned
+
+    cp -f ${bugf} ${oldf}
+    cp -f ${srcf} ${oldsrcf}
+    rm -f ${srcf}
+    node=`uname -n`
+    echo "`date`: Using source lines ${srcf}" 1>&2
+    echo "`date`: Checking source code" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh "${project}" | \
+	xargs /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --src=${srcf}
+    ) > ${bugf}
+    # Remove things we are not interested in to signal by email
+    # gdbarch changes are not important here
+    # Also convert ` into ' to avoid command substitution in script below
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${oldf} > ${oldf1}
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${bugf} > ${newf1}
+    # Remove line number info so that code inclusion/deletion
+    # has no impact on the result
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${oldf1} > ${oldpruned}
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${newf1} > ${newpruned}
+    # Use diff without option to get normal diff output that
+    # is reparsed after
+    diff ${oldpruned} ${newpruned} > ${diffin}
+    # Only keep new warnings
+    sed -n -e "/^>.*/p" ${diffin} > ${diff}
+    sedscript=${wwwdir}/sedscript
+    script=${wwwdir}/script
+    sed -n -e "s|\(^[0-9,]*\)a\(.*\)|echo \1a\2 \n \
+	sed -n \'\2s:\\\\(.*\\\\):> \\\\1:p\' ${newf1}|p" \
+	-e "s|\(^[0-9,]*\)d\(.*\)|echo \1d\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1}|p" \
+	-e "s|\(^[0-9,]*\)c\(.*\)|echo \1c\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1} \n \
+	sed -n \"\2s:\\\\(.*\\\\):> \\\\1:p\" ${newf1}|p" \
+	${diffin} > ${sedscript}
+    ${SHELL} ${sedscript} > ${wwwdir}/message
+    sed -n \
+	-e "s;\(.*\);echo \\\"\1\\\";p" \
+	-e "s;.*< \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${oldsrcf};p" \
+	-e "s;.*> \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${srcf};p" \
+	${wwwdir}/message > ${script}
+    ${SHELL} ${script} > ${wwwdir}/mail-message
+    if [ "x${branch}" != "x" ]; then
+	email_suffix="`date` in ${branch}"
+    else
+	email_suffix="`date`"
+    fi
+
+fi
+
+
+
+
+if ${check_doschk_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking for doschk" 1>&2
+    rm -f "${wwwdir}"/ari.doschk.*
+    fnchange_lst="${srcdir}"/gdb/config/djgpp/fnchange.lst
+    fnchange_awk="${wwwdir}"/ari.doschk.awk
+    doschk_in="${wwwdir}"/ari.doschk.in
+    doschk_out="${wwwdir}"/ari.doschk.out
+    doschk_bug="${wwwdir}"/ari.doschk.bug
+    doschk_char="${wwwdir}"/ari.doschk.char
+
+    # Transform fnchange.lst into fnchange.awk.  The program DJTAR
+    # does a textual substitution of each file name using the list.
+    # Generate an awk script that does the equivalent - matches an
+    # exact line and then outputs the replacement.
+
+    sed -e 's;@[^@]*@[/]*\([^ ]*\) @[^@]*@[/]*\([^ ]*\);\$0 == "\1" { print "\2"\; next\; };' \
+	< "${fnchange_lst}" > "${fnchange_awk}"
+    echo '{ print }' >> "${fnchange_awk}"
+
+    # Do the raw analysis - transform the list of files into the DJGPP
+    # equivalents putting it in the .in file
+    ( cd "${srcdir}" && find * \
+	-name '*.info-[0-9]*' -prune \
+	-o -name tcl -prune \
+	-o -name itcl -prune \
+	-o -name tk -prune \
+	-o -name libgui -prune \
+	-o -name tix -prune \
+	-o -name dejagnu -prune \
+	-o -name expect -prune \
+	-o -type f -print ) \
+    | $AWK -f ${fnchange_awk} > ${doschk_in}
+
+    # Start with a clean slate
+    rm -f ${doschk_bug}
+
+    # Check for any invalid characters.
+    grep '[\+\,\;\=\[\]\|\<\>\\\"\:\?\*]' < ${doschk_in} > ${doschk_char}
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    sed < ${doschk_char} >> ${doschk_bug} \
+	-e s'/$/:0: dos: DOSCHK: Invalid DOS character/'
+
+    # Magic to map ari.doschk.out to ari.doschk.bug goes here
+    doschk < ${doschk_in} > ${doschk_out}
+    cat ${doschk_out} | $AWK >> ${doschk_bug} '
+BEGIN {
+    state = 1;
+    invalid_dos = state++; bug[invalid_dos] = "invalid DOS file name";  category[invalid_dos] = "dos";
+    same_dos = state++;    bug[same_dos]    = "DOS 8.3";                category[same_dos] = "dos";
+    same_sysv = state++;   bug[same_sysv]   = "SysV";
+    long_sysv = state++;   bug[long_sysv]   = "long SysV";
+    internal = state++;    bug[internal]    = "internal doschk";        category[internal] = "internal";
+    state = 0;
+}
+/^$/ { state = 0; next; }
+/^The .* not valid DOS/     { state = invalid_dos; next; }
+/^The .* same DOS/          { state = same_dos; next; }
+/^The .* same SysV/         { state = same_sysv; next; }
+/^The .* too long for SysV/ { state = long_sysv; next; }
+/^The .* /                  { state = internal; next; }
+
+NF == 0 { next }
+
+NF == 3 { name = $1 ; file = $3 }
+NF == 1 { file = $1 }
+NF > 3 && $2 == "-" { file = $1 ; name = gensub(/^.* - /, "", 1) }
+
+state == same_dos {
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print  file ":0: " category[state] ": " \
+	name " " bug[state] " " " dup: " \
+	" DOSCHK - the names " name " and " file " resolve to the same" \
+	" file on a " bug[state] \
+	" system.<br>For DOS, this can be fixed by modifying the file" \
+	" fnchange.lst."
+    next
+}
+state == invalid_dos {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  name ": DOSCHK - " name
+    next
+}
+state == internal {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  bug[state] ": DOSCHK - a " \
+	bug[state] " problem"
+}
+'
+fi
+
+
+
+if ${check_werror_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking Makefile.in for non- -Werror rules"
+    rm -f ${wwwdir}/ari.werror.*
+    cat "${srcdir}/${project}/Makefile.in" | $AWK > ${wwwdir}/ari.werror.bug '
+BEGIN {
+    count = 0
+    cont_p = 0
+    full_line = ""
+}
+/^[-_[:alnum:]]+\.o:/ {
+    file = gensub(/.o:.*/, "", 1) ".c"
+}
+
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+/\$\(COMPILE\.pre\)/ {
+    print file " has  line " $0
+    if (($0 !~ /\$\(.*ERROR_CFLAGS\)/) && ($0 !~ /\$\(INTERNAL_CFLAGS\)/)) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print "'"${project}"'/" file ":0: info: Werror: The file is not being compiled with -Werror"
+    }
+}
+'
+fi
+
+
+# From the warnings, generate the doc and indexed bug files
+if ${update_doc_p}
+then
+    cd ${wwwdir}
+    rm -f ari.doc ari.idx ari.doc.bug
+    # Generate an extra file containing all the bugs that the ARI can detect.
+    /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --print-doc >> ari.doc.bug
+    cat ari.*.bug | $AWK > ari.idx '
+BEGIN {
+    FS=": *"
+}
+{
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    file = $1
+    line = $2
+    category = $3
+    bug = $4
+    if (! (bug in cat)) {
+	cat[bug] = category
+	# strip any trailing .... (supplement)
+	doc[bug] = gensub(/ \([^\)]*\)$/, "", 1, $5)
+	count[bug] = 0
+    }
+    if (file != "") {
+	count[bug] += 1
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	print bug ":" file ":" category
+    }
+    # Also accumulate some categories as obsolete
+    if (category == "deprecated") {
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	if (file != "") {
+	    print category ":" file ":" "obsolete"
+	}
+	#count[category]++
+	#doc[category] = "Contains " category " code"
+    }
+}
+END {
+    i = 0;
+    for (bug in count) {
+	# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+	print bug ":" count[bug] ":" cat[bug] ":" doc[bug] >> "ari.doc"
+    }
+}
+'
+fi
+
+
+# print_toc BIAS MIN_COUNT CATEGORIES TITLE
+
+# Print a table of contents containing the bugs CATEGORIES.  If the
+# BUG count >= MIN_COUNT print it in the table-of-contents.  If
+# MIN_COUNT is non -ve, also include a link to the table.Adjust the
+# printed BUG count by BIAS.
+
+all=
+
+print_toc ()
+{
+    bias="$1" ; shift
+    min_count="$1" ; shift
+
+    all=" $all $1 "
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    shift
+
+    title="$@" ; shift
+
+    echo "<p>" >> ${newari}
+    echo "<a name=${title}>" | tr '[A-Z]' '[a-z]' >> ${newari}
+    echo "<h3>${title}</h3>" >> ${newari}
+    cat >> ${newari} # description
+
+    cat >> ${newari} <<EOF
+<p>
+<table>
+<tr><th align=left>BUG</th><th>Total</th><th align=left>Description</th></tr>
+EOF
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    cat ${wwwdir}/ari.doc \
+    | sort -t: +1rn -2 +0d \
+    | $AWK >> ${newari} '
+BEGIN {
+    FS=":"
+    '"$categories"'
+    MIN_COUNT = '${min_count}'
+    BIAS = '${bias}'
+    total = 0
+    nr = 0
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (count < MIN_COUNT) next
+    if (!(category in categories)) next
+    nr += 1
+    total += count
+    printf "<tr>"
+    printf "<th align=left valign=top><a name=\"%s\">", bug
+    printf "%s", gensub(/_/, " ", "g", bug)
+    printf "</a></th>"
+    printf "<td align=right valign=top>"
+    if (count > 0 && MIN_COUNT >= 0) {
+	printf "<a href=\"#,%s\">%d</a></td>", bug, count + BIAS
+    } else {
+	printf "%d", count + BIAS
+    }
+    printf "</td>"
+    printf "<td align=left valign=top>%s</td>", doc
+    printf "</tr>"
+    print ""
+}
+END {
+    print "<tr><th align=right valign=top>" nr "</th><th align=right valign=top>" total "</th><td></td></tr>"
+}
+'
+cat >> ${newari} <<EOF
+</table>
+<p>
+EOF
+}
+
+
+print_table ()
+{
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    # Remember to prune the dir prefix from projects files
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    cat ${wwwdir}/ari.idx | $AWK >> ${newari} '
+function qsort (table,
+		middle, tmp, left, nr_left, right, nr_right, result) {
+    middle = ""
+    for (middle in table) { break; }
+    nr_left = 0;
+    nr_right = 0;
+    for (tmp in table) {
+	if (tolower(tmp) < tolower(middle)) {
+	    nr_left++
+	    left[tmp] = tmp
+	} else if (tolower(tmp) > tolower(middle)) {
+	    nr_right++
+	    right[tmp] = tmp
+	}
+    }
+    #print "qsort " nr_left " " middle " " nr_right > "/dev/stderr"
+    result = ""
+    if (nr_left > 0) {
+	result = qsort(left) SUBSEP
+    }
+    result = result middle
+    if (nr_right > 0) {
+	result = result SUBSEP qsort(right)
+    }
+    return result
+}
+function print_heading (where, bug_i) {
+    print ""
+    print "<tr border=1>"
+    print "<th align=left>File</th>"
+    print "<th align=left><em>Total</em></th>"
+    print "<th></th>"
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th>"
+	# The title names are offset by one.  Otherwize, when the browser
+	# jumps to the name it leaves out half the relevant column.
+	#printf "<a name=\",%s\">&nbsp;</a>", bug
+	printf "<a name=\",%s\">&nbsp;</a>", i2bug[bug_i-1]
+	printf "<a href=\"#%s\">", bug
+	printf "%s", gensub(/_/, " ", "g", bug)
+	printf "</a>\n"
+	printf "</th>\n"
+    }
+    #print "<th></th>"
+    printf "<th><a name=\"%s,\">&nbsp;</a></th>\n", i2bug[bug_i-1]
+    print "<th align=left><em>Total</em></th>"
+    print "<th align=left>File</th>"
+    print "</tr>"
+}
+function print_totals (where, bug_i) {
+    print "<th align=left><em>Totals</em></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&gt;"
+    printf "</th>\n"
+    print "<th></th>";
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th align=right>"
+	printf "<em>"
+	printf "<a href=\"#%s\">%d</a>", bug, bug_total[bug]
+	printf "</em>";
+	printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, where], bug
+	printf "<a href=\"#%s,%s\">v</a>", next_file[bug, where], bug
+	printf "<a name=\"%s,%s\">&nbsp;</a>", where, bug
+	printf "</th>";
+	print ""
+    }
+    print "<th></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&lt;"
+    printf "</th>\n"
+    print "<th align=left><em>Totals</em></th>"
+    print "</tr>"
+}
+BEGIN {
+    FS = ":"
+    '"${categories}"'
+    nr_file = 0;
+    nr_bug = 0;
+}
+{
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    bug = $1
+    file = $2
+    category = $3
+    # Interested in this
+    if (!(category in categories)) next
+    # Totals
+    db[bug, file] += 1
+    bug_total[bug] += 1
+    file_total[file] += 1
+    total += 1
+}
+END {
+
+    # Sort the files and bugs creating indexed lists.
+    nr_bug = split(qsort(bug_total), i2bug, SUBSEP);
+    nr_file = split(qsort(file_total), i2file, SUBSEP);
+
+    # Dummy entries for first/last
+    i2file[0] = 0
+    i2file[-1] = -1
+    i2bug[0] = 0
+    i2bug[-1] = -1
+
+    # Construct a cycle of next/prev links.  The file/bug "0" and "-1"
+    # are used to identify the start/end of the cycle.  Consequently,
+    # prev(0) = -1 (prev of start is the end) and next(-1) = 0 (next
+    # of end is the start).
+
+    # For all the bugs, create a cycle that goes to the prev / next file.
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i]
+	prev = 0
+	prev_file[bug, 0] = -1
+	next_file[bug, -1] = 0
+	for (file_i = 1; file_i <= nr_file; file_i++) {
+	    file = i2file[file_i]
+	    if ((bug, file) in db) {
+		prev_file[bug, file] = prev
+		next_file[bug, prev] = file
+		prev = file
+	    }
+	}
+	prev_file[bug, -1] = prev
+	next_file[bug, prev] = -1
+    }
+
+    # For all the files, create a cycle that goes to the prev / next bug.
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i]
+	prev = 0
+	prev_bug[file, 0] = -1
+	next_bug[file, -1] = 0
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i]
+	    if ((bug, file) in db) {
+		prev_bug[file, bug] = prev
+		next_bug[file, prev] = bug
+		prev = bug
+	    }
+	}
+	prev_bug[file, -1] = prev
+	next_bug[file, prev] = -1
+    }
+
+    print "<table border=1 cellspacing=0>"
+    print "<tr></tr>"
+    print_heading(0);
+    print "<tr></tr>"
+    print_totals(0);
+    print "<tr></tr>"
+
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i];
+	pfile = gensub(/^'${project}'\//, "", 1, file)
+	print ""
+	print "<tr>"
+	print "<th align=left><a name=\"" file ",\">" pfile "</a></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&gt;</a>", file, next_bug[file, 0]
+	printf "</th>\n"
+	print "<th></th>"
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i];
+	    if ((bug, file) in db) {
+		printf "<td align=right>"
+		printf "<a href=\"#%s\">%d</a>", bug, db[bug, file]
+		printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, file], bug
+		printf "<a href=\"#%s,%s\">v</a>", next_file[bug, file], bug
+		printf "<a name=\"%s,%s\">&nbsp;</a>", file, bug
+		printf "</td>"
+		print ""
+	    } else {
+		print "<td>&nbsp;</td>"
+		#print "<td></td>"
+	    }
+	}
+	print "<th></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&lt;</a>", file, prev_bug[file, -1]
+	printf "</th>\n"
+	print "<th align=left>" pfile "</th>"
+	print "</tr>"
+    }
+
+    print "<tr></tr>"
+    print_totals(-1)
+    print "<tr></tr>"
+    print_heading(-1);
+    print "<tr></tr>"
+    print ""
+    print "</table>"
+    print ""
+}
+'
+}
+
+
+# Make the scripts available
+cp ${aridir}/gdb_*.sh ${wwwdir}
+
+# Compute the ARI index - ratio of zero vs non-zero problems.
+indexes=`awk '
+BEGIN {
+    FS=":"
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1; count = $2; category = $3; doc = $4
+
+    if (bug ~ /^legacy_/) legacy++
+    if (bug ~ /^deprecated_/) deprecated++
+
+    if (category !~ /^gdbarch$/) {
+	bugs += count
+    }
+    if (count == 0) {
+	oks++
+    }
+}
+END {
+    #print "tests/ok:", nr / ok
+    #print "bugs/tests:", bugs / nr
+    #print "bugs/ok:", bugs / ok
+    print bugs / ( oks + legacy + deprecated )
+}
+' ${wwwdir}/ari.doc`
+
+# Merge, generating the ARI tables.
+if ${update_web_p}
+then
+    echo "Create the ARI table" 1>&2
+    oldari=${wwwdir}/old.html
+    ari=${wwwdir}/index.html
+    newari=${wwwdir}/new.html
+    rm -f ${newari} ${newari}.gz
+    cat <<EOF >> ${newari}
+<html>
+<head>
+<title>A.R. Index for GDB version ${version}</title>
+</head>
+<body>
+
+<center><h2>A.R. Index for GDB version ${version}<h2></center>
+
+<!-- body, update above using ../index.sh -->
+
+<!-- Navigation.  This page contains the following anchors.
+"BUG": The definition of the bug.
+"FILE,BUG": The row/column containing FILEs BUG count
+"0,BUG", "-1,BUG": The top/bottom total for BUGs column.
+"FILE,O", "FILE,-1": The left/right total for FILEs row.
+",BUG": The top title for BUGs column.
+"FILE,": The left title for FILEs row.
+-->
+
+<center><h3>${indexes}</h3></center>
+<center><h3>You can not take this seriously!</h3></center>
+
+<center>
+Also available:
+<a href="../gdb/ari/">most recent branch</a>
+|
+<a href="../gdb/current/ari/">current</a>
+|
+<a href="../gdb/download/ari/">last release</a>
+</center>
+
+<center>
+Last updated: `date -u`
+</center>
+EOF
+
+    print_toc 0 1 "internal regression" Critical <<EOF
+Things previously eliminated but returned.  This should always be empty.
+EOF
+
+    print_table "regression code comment obsolete gettext"
+
+    print_toc 0 0 code Code <<EOF
+Coding standard problems, portability problems, readability problems.
+EOF
+
+    print_toc 0 0 comment Comments <<EOF
+Problems concerning comments in source files.
+EOF
+
+    print_toc 0 0 gettext GetText <<EOF
+Gettext related problems.
+EOF
+
+    print_toc 0 -1 dos DOS 8.3 File Names <<EOF
+File names with problems on 8.3 file systems.
+EOF
+
+    print_toc -2 -1 deprecated Deprecated <<EOF
+Mechanisms that have been replaced with something better, simpler,
+cleaner; or are no longer required by core-GDB.  New code should not
+use deprecated mechanisms.  Existing code, when touched, should be
+updated to use non-deprecated mechanisms.  See obsolete and deprecate.
+(The declaration and definition are hopefully excluded from count so
+zero should indicate no remaining uses).
+EOF
+
+    print_toc 0 0 obsolete Obsolete <<EOF
+Mechanisms that have been replaced, but have not yet been marked as
+such (using the deprecated_ prefix).  See deprecate and deprecated.
+EOF
+
+    print_toc 0 -1 deprecate Deprecate <<EOF
+Mechanisms that are a candidate for being made obsolete.  Once core
+GDB no longer depends on these mechanisms and/or there is a
+replacement available, these mechanims can be deprecated (adding the
+deprecated prefix) obsoleted (put into category obsolete) or deleted.
+See obsolete and deprecated.
+EOF
+
+    print_toc -2 -1 legacy Legacy <<EOF
+Methods used to prop up targets using targets that still depend on
+deprecated mechanisms. (The method's declaration and definition are
+hopefully excluded from count).
+EOF
+
+    print_toc -2 -1 gdbarch Gdbarch <<EOF
+Count of calls to the gdbarch set methods.  (Declaration and
+definition hopefully excluded from count).
+EOF
+
+    print_toc 0 -1 macro Macro <<EOF
+Breakdown of macro definitions (and #undef) in configuration files.
+EOF
+
+    print_toc 0 0 regression Fixed <<EOF
+Problems that have been expunged from the source code.
+EOF
+
+    # Check for invalid categories
+    for a in $all; do
+	alls="$alls all[$a] = 1 ;"
+    done
+    cat ari.*.doc | $AWK >> ${newari} '
+BEGIN {
+    FS = ":"
+    '"$alls"'
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (!(category in all)) {
+	print "<b>" category "</b>: no documentation<br>"
+    }
+}
+'
+
+    cat >> ${newari} <<EOF
+<center>
+Input files:
+`( cd ${wwwdir} && ls ari.*.bug ari.idx ari.doc ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<center>
+Scripts:
+`( cd ${wwwdir} && ls *.sh ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<!-- /body, update below using ../index.sh -->
+</body>
+</html>
+EOF
+
+    for i in . .. ../..; do
+	x=${wwwdir}/${i}/index.sh
+	if test -x $x; then
+	    $x ${newari}
+	    break
+	fi
+    done
+
+    gzip -c -v -9 ${newari} > ${newari}.gz
+
+    cp ${ari} ${oldari}
+    cp ${ari}.gz ${oldari}.gz
+    cp ${newari} ${ari}
+    cp ${newari}.gz ${ari}.gz
+
+fi # update_web_p
+
+# ls -l ${wwwdir}
+
+exit 0

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v2] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-26 12:41   ` [RFA-v2] " Pierre Muller
@ 2012-05-27  4:06     ` Sergio Durigan Junior
  2012-05-27 19:53       ` Pierre Muller
  0 siblings, 1 reply; 32+ messages in thread
From: Sergio Durigan Junior @ 2012-05-27  4:06 UTC (permalink / raw)
  To: Pierre Muller; +Cc: 'Jan Kratochvil', gdb-patches

Hi Pierre,

On Saturday, May 26 2012, Pierre Muller wrote:

>> The patch is corrupted by line wrapping, 48 lines and some are not trivial
>> to recover.
>  Sorry,
> I hope the attached patch will apply correctly.

It applies, but it creates a new directoy named src/ari, instead of
src/gdb/contrib/ari.  Not sure if it was intended, but it doesn't work
out of the box as I was expecting.  See below.

>   Concerning the new create-web-ari-in-src.sh, 
> this is indeed a new script (hence the 2012 copyright only)
> and it is just a way to be able to generate the ARI index.html web 
> page without any parameters.
>
>   It basically only give default parameters
> to update-web-ari.sh script, which requires four parameters.
>
>   I hope this clarifies some of your questions.

Yes, it does, thank you.

>   Concerning Sergio's suggestion to separate out the awk script into 
> a separate file, I would like to minimize the changes relative to the
> existing ss
> cvs repository files.

Hm, do you mean that you prefer to postpone this separation, or that you
don't intend to do it at all?

If the latter, I still think it's valid to do it because it will improve
the readability of the code, IMO.  But of course I won't push it if you
don't intend to do it.

>   About the use of dirname, I think that
> direname is like basename part of 
> coreutils, and basename is already use several times
> inside update-web-ari script in ss.

Yes, I agree, my only concern is that maybe some obscure system won't
have some of these binaries (like `awk', for example).

>   I agree that being made public and thus available to 
> many users, it would be nice to chack availability, and add a workaround,
> but I have no 
> precise how to do it, probably using a configure or Makefile could help here
> .

I was thinking more about a check in the shell script itself, no need
for Makefiles or configure options.

> Note that gdb directory cvonfigure script seems to contain both
> dirname and basename...

Yeah, good point.  Maybe I'm being too paranoid.

>   I hope you will be able to generate a ARI web page,
> and give more feedbacks,

I wasn't able to generate the webpage easily.  I had to do several fixes
in the scripts.  I am sending a new version of the patch which should
apply cleanly and create the proper directory under src/gdb/contrib.

I have taken the liberty to fix several errors that were not allowing me
to generate the web page correctly.  In order to test it, I was using
the following command:

   /bin/sh update-web-ari.sh ~/work/src/git/gdb-src /tmp/create-ari /tmp/webdir-ari gdb

i.e.,

   /bin/sh update-web-ari.sh <SRCDIR> <TMPDIR> <WEBDIR> <PROJECTNAME>

Note that I did not set the executable bit in any of the scripts below.
I have chosen to leave them as regular files, just like you did in your patch.

Thanks,

-- 
Sergio

diff --git a/gdb/contrib/ari/create-web-ari-in-src.sh b/gdb/contrib/ari/create-web-ari-in-src.sh
new file mode 100644
index 0000000..062cde5
--- /dev/null
+++ b/gdb/contrib/ari/create-web-ari-in-src.sh
@@ -0,0 +1,68 @@
+#! /bin/sh
+
+# GDB script to create web ARI page directly from within gdb/ari directory.
+#
+# Copyright (C) 2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+set -x
+
+# Determine directory of current script.
+scriptpath=`dirname $0`
+# If "scriptpath" is a relative path, then convert it to absolute.
+if [ "`echo ${scriptpath} | cut -b1`" != '/' ] ; then
+    scriptpath="`pwd`/${scriptpath}"
+fi
+
+# update-web-ari.sh script wants four parameters
+# 1: directory of checkout src or gdb-RELEASE for release sources.
+# 2: a temp directory.
+# 3: a directory for generated web page.
+# 4: The name of the current package, must be gdb here.
+# Here we provide default values for these 4 parameters
+
+# srcdir parameter
+if [ -z "${srcdir}" ] ; then
+  srcdir=${scriptpath}/../../..
+fi
+
+# Determine location of a temporary directory to be used by
+# update-web-ari.sh script.
+if [ -z "${tempdir}" ] ; then
+  if [ ! -z "$TMP" ] ; then
+    tempdir=$TMP/create-ari
+  elif [ ! -z "$TEMP" ] ; then
+    tempdir=$TEMP/create-ari
+  else
+    tempdir=/tmp/create-ari
+  fi
+fi
+
+# Default location of generate index.hmtl web page.
+if [ -z "${webdir}" ] ; then
+  webdir=~/htdocs/www/local/ari
+fi
+
+# Launch update-web-ari.sh in same directory as current script.
+/bin/sh ${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
+
+if [ -f "${webdir}/index.html" ] ; then
+  echo "ARI output can be viewed in file \"${webdir}/index.html\""
+else
+  echo "ARI script failed to generate file \"${webdir}/index.html\""
+fi
+
diff --git a/gdb/contrib/ari/gdb_ari.sh b/gdb/contrib/ari/gdb_ari.sh
new file mode 100644
index 0000000..f089026
--- /dev/null
+++ b/gdb/contrib/ari/gdb_ari.sh
@@ -0,0 +1,1347 @@
+#!/bin/sh
+
+# GDB script to list of problems using awk.
+#
+# Copyright (C) 2002-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=c ; export LANG
+LC_ALL=c ; export LC_ALL
+
+# Permanent checks take the form:
+
+#     Do not use XXXX, ISO C 90 implies YYYY
+#     Do not use XXXX, instead use YYYY''.
+
+# and should never be removed.
+
+# Temporary checks take the form:
+
+#     Replace XXXX with YYYY
+
+# and once they reach zero, can be eliminated.
+
+# FIXME: It should be able to override this on the command line
+error="regression"
+warning="regression"
+ari="regression eol code comment deprecated legacy obsolete gettext"
+all="regression eol code comment deprecated legacy obsolete gettext deprecate internal gdbarch macro"
+print_doc=0
+print_idx=0
+
+usage ()
+{
+    cat <<EOF 1>&2
+Error: $1
+
+Usage:
+    $0 --print-doc --print-idx -Wall -Werror -W<category> <file> ...
+Options:
+  --print-doc    Print a list of all potential problems, then exit.
+  --print-idx    Include the problems IDX (index or key) in every message.
+  --src=file     Write source lines to file.
+  -Werror        Treat all problems as errors.
+  -Wall          Report all problems.
+  -Wari          Report problems that should be fixed in new code.
+  -W<category>   Report problems in the specifed category.  Vaid categories
+                 are: ${all}
+EOF
+    exit 1
+}
+
+
+# Parse the various options
+Woptions=
+srclines=""
+while test $# -gt 0
+do
+    case "$1" in
+    -Wall ) Woptions="${all}" ;;
+    -Wari ) Woptions="${ari}" ;;
+    -Werror ) Werror=1 ;;
+    -W* ) Woptions="${Woptions} `echo x$1 | sed -e 's/x-W//'`" ;;
+    --print-doc ) print_doc=1 ;;
+    --print-idx ) print_idx=1 ;;
+    --src=* ) srclines="`echo $1 | sed -e 's/--src=/srclines=\"/'`\"" ;;
+    -- ) shift ; break ;;
+    - ) break ;;
+    -* ) usage "$1: unknown option" ;;
+    * ) break ;;
+    esac
+    shift
+done
+if test -n "$Woptions" ; then
+    warning="$Woptions"
+    error=
+fi
+
+
+# -Werror implies treating all warnings as errors.
+if test -n "${Werror}" ; then
+    error="${error} ${warning}"
+fi
+
+
+# Validate all errors and warnings.
+for w in ${warning} ${error}
+do
+    case " ${all} " in
+    *" ${w} "* ) ;;
+    * ) usage "Unknown option -W${w}" ;;
+    esac
+done
+
+
+# make certain that there is at least one file.
+if test $# -eq 0 -a ${print_doc} = 0
+then
+    usage "Missing file."
+fi
+
+
+# Convert the errors/warnings into corresponding array entries.
+for a in ${all}
+do
+    aris="${aris} ari_${a} = \"${a}\";"
+done
+for w in ${warning}
+do
+    warnings="${warnings} warning[ari_${w}] = 1;"
+done
+for e in ${error}
+do
+    errors="${errors} error[ari_${e}]  = 1;"
+done
+
+awk -- '
+BEGIN {
+    # NOTE, for a per-file begin use "FNR == 1".
+    '"${aris}"'
+    '"${errors}"'
+    '"${warnings}"'
+    '"${srclines}"'
+    print_doc =  '$print_doc'
+    print_idx =  '$print_idx'
+    PWD = "'`pwd`'"
+}
+
+# Print the error message for BUG.  Append SUPLEMENT if non-empty.
+function print_bug(file,line,prefix,category,bug,doc,supplement, suffix,idx) {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    if (supplement) {
+	suffix = " (" supplement ")"
+    } else {
+	suffix = ""
+    }
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print file ":" line ": " prefix category ": " idx doc suffix
+    if (srclines != "") {
+	print file ":" line ":" $0 >> srclines
+    }
+}
+
+function fix(bug,file,count) {
+    skip[bug, file] = count
+    skipped[bug, file] = 0
+}
+
+function fail(bug,supplement) {
+    if (doc[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing doc for bug " bug)
+	exit
+    }
+    if (category[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing category for bug " bug)
+	exit
+    }
+
+    if (ARI_OK == bug) {
+	return
+    }
+    # Trim the filename down to just DIRECTORY/FILE so that it can be
+    # robustly used by the FIX code.
+
+    if (FILENAME ~ /^\//) {
+	canonicalname = FILENAME
+    } else {
+        canonicalname = PWD "/" FILENAME
+    }
+    shortname = gensub (/^.*\/([^\\]*\/[^\\]*)$/, "\\1", 1, canonicalname)
+
+    skipped[bug, shortname]++
+    if (skip[bug, shortname] >= skipped[bug, shortname]) {
+	# print FILENAME, FNR, skip[bug, FILENAME], skipped[bug, FILENAME], bug
+	# Do nothing
+    } else if (error[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "", category[bug], bug, doc[bug], supplement)
+    } else if (warning[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "warning: ", category[bug], bug, doc[bug], supplement)
+    }
+}
+
+FNR == 1 {
+    seen[FILENAME] = 1
+    if (match(FILENAME, "\\.[ly]$")) {
+      # FILENAME is a lex or yacc source
+      is_yacc_or_lex = 1
+    }
+    else {
+      is_yacc_or_lex = 0
+    }
+}
+END {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    # Did we do only a partial skip?
+    for (bug_n_file in skip) {
+	split (bug_n_file, a, SUBSEP)
+	bug = a[1]
+	file = a[2]
+	if (seen[file] && (skipped[bug_n_file] < skip[bug_n_file])) {
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    b = file " missing " bug
+	    print_bug(file, 0, "", "internal", file " missing " bug, "Expecting " skip[bug_n_file] " occurances of bug " bug " in file " file ", only found " skipped[bug_n_file])
+	}
+    }
+}
+
+
+# Skip OBSOLETE lines
+/(^|[^_[:alnum:]])OBSOLETE([^_[:alnum:]]|$)/ { next; }
+
+# Skip ARI lines
+
+BEGIN {
+    ARI_OK = ""
+}
+
+/\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = gensub(/^.*\/\* ARI:[[:space:]]*(.*[^[:space:]])[[:space:]]*\*\/.*$/, "\\1", 1, $0)
+    # print "ARI line found \"" $0 "\""
+    # print "ARI_OK \"" ARI_OK "\""
+}
+! /\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = ""
+}
+
+
+# Things in comments
+
+BEGIN { doc["GNU/Linux"] = "\
+Do not use `Linux'\'', instead use `Linux kernel'\'' or `GNU/Linux system'\'';\
+ comments should clearly differentiate between the two (this test assumes that\
+ word `Linux'\'' appears on the same line as the word `GNU'\'' or `kernel'\''\
+ or a kernel version"
+    category["GNU/Linux"] = ari_comment
+}
+/(^|[^_[:alnum:]])Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux [[:digit:]]\.[[:digit:]]+)/ {
+    fail("GNU/Linux")
+}
+
+BEGIN { doc["ARGSUSED"] = "\
+Do not use ARGSUSED, unnecessary"
+    category["ARGSUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ARGSUSED([^_[:alnum:]]|$)/ {
+    fail("ARGSUSED")
+}
+
+
+# SNIP - Strip out comments - SNIP
+
+FNR == 1 {
+    comment_p = 0
+}
+comment_p && /\*\// { gsub (/^([^\*]|\*+[^\/\*])*\*+\//, " "); comment_p = 0; }
+comment_p { next; }
+!comment_p { gsub (/\/\*([^\*]|\*+[^\/\*])*\*+\//, " "); }
+!comment_p && /(^|[^"])\/\*/ { gsub (/\/\*.*$/, " "); comment_p = 1; }
+
+
+BEGIN { doc["_ markup"] = "\
+All messages should be marked up with _."
+    category["_ markup"] = ari_gettext
+}
+/^[^"]*[[:space:]](warning|error|error_no_arg|query|perror_with_name)[[:space:]]*\([^_\(a-z]/ {
+    if (! /\("%s"/) {
+	fail("_ markup")
+    }
+}
+
+BEGIN { doc["trailing new line"] = "\
+A message should not have a trailing new line"
+    category["trailing new line"] = ari_gettext
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(_\(".*\\n"\)[\),]/ {
+    fail("trailing new line")
+}
+
+# Include files for which GDB has a custom version.
+
+BEGIN { doc["assert.h"] = "\
+Do not include assert.h, instead include \"gdb_assert.h\"";
+    category["assert.h"] = ari_regression
+    fix("assert.h", "gdb/gdb_assert.h", 0) # it does not use it
+}
+/^#[[:space:]]*include[[:space:]]+.assert\.h./ {
+    fail("assert.h")
+}
+
+BEGIN { doc["dirent.h"] = "\
+Do not include dirent.h, instead include gdb_dirent.h"
+    category["dirent.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.dirent\.h./ {
+    fail("dirent.h")
+}
+
+BEGIN { doc["regex.h"] = "\
+Do not include regex.h, instead include gdb_regex.h"
+    category["regex.h"] = ari_regression
+    fix("regex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.regex\.h./ {
+    fail("regex.h")
+}
+
+BEGIN { doc["xregex.h"] = "\
+Do not include xregex.h, instead include gdb_regex.h"
+    category["xregex.h"] = ari_regression
+    fix("xregex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.xregex\.h./ {
+    fail("xregex.h")
+}
+
+BEGIN { doc["gnu-regex.h"] = "\
+Do not include gnu-regex.h, instead include gdb_regex.h"
+    category["gnu-regex.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.gnu-regex\.h./ {
+    fail("gnu regex.h")
+}
+
+BEGIN { doc["stat.h"] = "\
+Do not include stat.h or sys/stat.h, instead include gdb_stat.h"
+    category["stat.h"] = ari_regression
+    fix("stat.h", "gdb/gdb_stat.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.stat\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/stat\.h./ {
+    fail("stat.h")
+}
+
+BEGIN { doc["wait.h"] = "\
+Do not include wait.h or sys/wait.h, instead include gdb_wait.h"
+    fix("wait.h", "gdb/gdb_wait.h", 2);
+    category["wait.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.wait\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/wait\.h./ {
+    fail("wait.h")
+}
+
+BEGIN { doc["vfork.h"] = "\
+Do not include vfork.h, instead include gdb_vfork.h"
+    fix("vfork.h", "gdb/gdb_vfork.h", 1);
+    category["vfork.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.vfork\.h./ {
+    fail("vfork.h")
+}
+
+BEGIN { doc["error not internal-warning"] = "\
+Do not use error(\"internal-warning\"), instead use internal_warning"
+    category["error not internal-warning"] = ari_regression
+}
+/error.*\"[Ii]nternal.warning/ {
+    fail("error not internal-warning")
+}
+
+BEGIN { doc["%p"] = "\
+Do not use printf(\"%p\"), instead use printf(\"%s\",paddr()) to dump a \
+target address, or host_address_to_string() for a host address"
+    category["%p"] = ari_code
+}
+/%p/ && !/%prec/ {
+    fail("%p")
+}
+
+BEGIN { doc["%ll"] = "\
+Do not use printf(\"%ll\"), instead use printf(\"%s\",phex()) to dump a \
+`long long'\'' value"
+    category["%ll"] = ari_code
+}
+# Allow %ll in scanf
+/%[0-9]*ll/ && !/scanf \(.*%[0-9]*ll/ {
+    fail("%ll")
+}
+
+
+# SNIP - Strip out strings - SNIP
+
+# Test on top.c, scm-valprint.c, remote-rdi.c, ada-lang.c
+FNR == 1 {
+    string_p = 0
+    trace_string = 0
+}
+# Strip escaped characters.
+{ gsub(/\\./, "."); }
+# Strip quoted quotes.
+{ gsub(/'\''.'\''/, "'\''.'\''"); }
+# End of multi-line string
+string_p && /\"/ {
+    if (trace_string) print "EOS:" FNR, $0;
+    gsub (/^[^\"]*\"/, "'\''");
+    string_p = 0;
+}
+# Middle of multi-line string, discard line.
+string_p {
+    if (trace_string) print "MOS:" FNR, $0;
+    $0 = ""
+}
+# Strip complete strings from the middle of the line
+!string_p && /\"[^\"]*\"/ {
+    if (trace_string) print "COS:" FNR, $0;
+    gsub (/\"[^\"]*\"/, "'\''");
+}
+# Start of multi-line string
+BEGIN { doc["multi-line string"] = "\
+Multi-line string must have the newline escaped"
+    category["multi-line string"] = ari_regression
+}
+!string_p && /\"/ {
+    if (trace_string) print "SOS:" FNR, $0;
+    if (/[^\\]$/) {
+	fail("multi-line string")
+    }
+    gsub (/\"[^\"]*$/, "'\''");
+    string_p = 1;
+}
+# { print }
+
+# Multi-line string
+string_p &&
+
+# Accumulate continuation lines
+FNR == 1 {
+    cont_p = 0
+}
+!cont_p { full_line = ""; }
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+
+# GDB uses ISO C 90.  Check for any non pure ISO C 90 code
+
+BEGIN { doc["PARAMS"] = "\
+Do not use PARAMS(), ISO C 90 implies prototypes"
+    category["PARAMS"] = ari_regression
+}
+/(^|[^_[:alnum:]])PARAMS([^_[:alnum:]]|$)/ {
+    fail("PARAMS")
+}
+
+BEGIN { doc["__func__"] = "\
+Do not use __func__, ISO C 90 does not support this macro"
+    category["__func__"] = ari_regression
+    fix("__func__", "gdb/gdb_assert.h", 1)
+}
+/(^|[^_[:alnum:]])__func__([^_[:alnum:]]|$)/ {
+    fail("__func__")
+}
+
+BEGIN { doc["__FUNCTION__"] = "\
+Do not use __FUNCTION__, ISO C 90 does not support this macro"
+    category["__FUNCTION__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__FUNCTION__([^_[:alnum:]]|$)/ {
+    fail("__FUNCTION__")
+}
+
+BEGIN { doc["__CYGWIN32__"] = "\
+Do not use __CYGWIN32__, instead use __CYGWIN__ or, better, an explicit \
+autoconf tests"
+    category["__CYGWIN32__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__CYGWIN32__([^_[:alnum:]]|$)/ {
+    fail("__CYGWIN32__")
+}
+
+BEGIN { doc["PTR"] = "\
+Do not use PTR, ISO C 90 implies `void *'\''"
+    category["PTR"] = ari_regression
+    #fix("PTR", "gdb/utils.c", 6)
+}
+/(^|[^_[:alnum:]])PTR([^_[:alnum:]]|$)/ {
+    fail("PTR")
+}
+
+BEGIN { doc["UCASE function"] = "\
+Function name is uppercase."
+    category["UCASE function"] = ari_code
+    possible_UCASE = 0
+    UCASE_full_line = ""
+}
+(possible_UCASE) {
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    # Closing brace found?
+    else if (UCASE_full_line ~ \
+	/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((UCASE_full_line ~ \
+	    /^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = UCASE_full_line;
+	    fail("UCASE function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_UCASE = 0
+	UCASE_full_line = ""
+    } else {
+	UCASE_full_line = UCASE_full_line $0;
+    }
+}
+/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_UCASE = 1
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    possible_FNR = FNR
+    UCASE_full_line = $0
+}
+
+
+BEGIN { doc["editCase function"] = "\
+Function name starts lower case but has uppercased letters."
+    category["editCase function"] = ari_code
+    possible_editCase = 0
+    editCase_full_line = ""
+}
+(possible_editCase) {
+    if (ARI_OK == "ediCase function") {
+	possible_editCase = 0
+    }
+    # Closing brace found?
+    else if (editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = editCase_full_line;
+	    fail("editCase function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_editCase = 0
+	editCase_full_line = ""
+    } else {
+	editCase_full_line = editCase_full_line $0;
+    }
+}
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_editCase = 1
+    if (ARI_OK == "editCase function") {
+        possible_editCase = 0
+    }
+    possible_FNR = FNR
+    editCase_full_line = $0
+}
+
+# Only function implementation should be on first column
+BEGIN { doc["function call in first column"] = "\
+Function name in first column should be restricted to function implementation"
+    category["function call in first column"] = ari_code
+}
+/^[a-z][a-z0-9_]*[[:space:]]*\((|[^*][^()]*)\)[[:space:]]*[^ \t]+/ {
+    fail("function call in first column")
+}
+
+
+# Functions without any parameter should have (void)
+# after their name not simply ().
+BEGIN { doc["no parameter function"] = "\
+Function having no parameter should be declared with funcname (void)."
+    category["no parameter function"] = ari_code
+}
+/^[a-zA-Z][a-z0-9A-Z_]*[[:space:]]*\(\)/ {
+    fail("no parameter function")
+}
+
+BEGIN { doc["hash"] = "\
+Do not use ` #...'\'', instead use `#...'\''(some compilers only correctly \
+parse a C preprocessor directive when `#'\'' is the first character on \
+the line)"
+    category["hash"] = ari_regression
+}
+/^[[:space:]]+#/ {
+    fail("hash")
+}
+
+BEGIN { doc["OP eol"] = "\
+Do not use &&, or || at the end of a line"
+    category["OP eol"] = ari_code
+}
+/(\|\||\&\&|==|!=)[[:space:]]*$/ {
+    fail("OP eol")
+}
+
+BEGIN { doc["strerror"] = "\
+Do not use strerror(), instead use safe_strerror()"
+    category["strerror"] = ari_regression
+    fix("strerror", "gdb/gdb_string.h", 1)
+    fix("strerror", "gdb/mingw-hdep.c", 1)
+    fix("strerror", "gdb/posix-hdep.c", 1)
+}
+/(^|[^_[:alnum:]])strerror[[:space:]]*\(/ {
+    fail("strerror")
+}
+
+BEGIN { doc["long long"] = "\
+Do not use `long long'\'', instead use LONGEST"
+    category["long long"] = ari_code
+    # defs.h needs two such patterns for LONGEST and ULONGEST definitions
+    fix("long long", "gdb/defs.h", 2)
+}
+/(^|[^_[:alnum:]])long[[:space:]]+long([^_[:alnum:]]|$)/ {
+    fail("long long")
+}
+
+BEGIN { doc["ATTRIBUTE_UNUSED"] = "\
+Do not use ATTRIBUTE_UNUSED, do not bother (GDB is compiled with -Werror and, \
+consequently, is not able to tolerate false warnings.  Since -Wunused-param \
+produces such warnings, neither that warning flag nor ATTRIBUTE_UNUSED \
+are used by GDB"
+    category["ATTRIBUTE_UNUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTRIBUTE_UNUSED([^_[:alnum:]]|$)/ {
+    fail("ATTRIBUTE_UNUSED")
+}
+
+BEGIN { doc["ATTR_FORMAT"] = "\
+Do not use ATTR_FORMAT, use ATTRIBUTE_PRINTF instead"
+    category["ATTR_FORMAT"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_FORMAT([^_[:alnum:]]|$)/ {
+    fail("ATTR_FORMAT")
+}
+
+BEGIN { doc["ATTR_NORETURN"] = "\
+Do not use ATTR_NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["ATTR_NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_NORETURN([^_[:alnum:]]|$)/ {
+    fail("ATTR_NORETURN")
+}
+
+BEGIN { doc["NORETURN"] = "\
+Do not use NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])NORETURN([^_[:alnum:]]|$)/ {
+    fail("NORETURN")
+}
+
+
+# General problems
+
+BEGIN { doc["multiple messages"] = "\
+Do not use multiple calls to warning or error, instead use a single call"
+    category["multiple messages"] = ari_gettext
+}
+FNR == 1 {
+    warning_fnr = -1
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(/ {
+    if (FNR == warning_fnr + 1) {
+	fail("multiple messages")
+    } else {
+	warning_fnr = FNR
+    }
+}
+
+# Commented out, but left inside sources, just in case.
+# BEGIN { doc["inline"] = "\
+# Do not use the inline attribute; \
+# since the compiler generally ignores this, better algorithm selection \
+# is needed to improved performance"
+#    category["inline"] = ari_code
+# }
+# /(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
+#     fail("inline")
+# }
+
+# This test is obsolete as this type
+# has been deprecated and finally suppressed from GDB sources
+#BEGIN { doc["obj_private"] = "\
+#Replace obj_private with objfile_data"
+#    category["obj_private"] = ari_obsolete
+#}
+#/(^|[^_[:alnum:]])obj_private([^_[:alnum:]]|$)/ {
+#    fail("obj_private")
+#}
+
+BEGIN { doc["abort"] = "\
+Do not use abort, instead use internal_error; GDB should never abort"
+    category["abort"] = ari_regression
+    fix("abort", "gdb/utils.c", 3)
+}
+/(^|[^_[:alnum:]])abort[[:space:]]*\(/ {
+    fail("abort")
+}
+
+BEGIN { doc["basename"] = "\
+Do not use basename, instead use lbasename"
+    category["basename"] = ari_regression
+}
+/(^|[^_[:alnum:]])basename[[:space:]]*\(/ {
+    fail("basename")
+}
+
+BEGIN { doc["assert"] = "\
+Do not use assert, instead use gdb_assert or internal_error; assert \
+calls abort and GDB should never call abort"
+    category["assert"] = ari_regression
+}
+/(^|[^_[:alnum:]])assert[[:space:]]*\(/ {
+    fail("assert")
+}
+
+BEGIN { doc["TARGET_HAS_HARDWARE_WATCHPOINTS"] = "\
+Replace TARGET_HAS_HARDWARE_WATCHPOINTS with nothing, not needed"
+    category["TARGET_HAS_HARDWARE_WATCHPOINTS"] = ari_regression
+}
+/(^|[^_[:alnum:]])TARGET_HAS_HARDWARE_WATCHPOINTS([^_[:alnum:]]|$)/ {
+    fail("TARGET_HAS_HARDWARE_WATCHPOINTS")
+}
+
+BEGIN { doc["ADD_SHARED_SYMBOL_FILES"] = "\
+Replace ADD_SHARED_SYMBOL_FILES with nothing, not needed?"
+    category["ADD_SHARED_SYMBOL_FILES"] = ari_regression
+}
+/(^|[^_[:alnum:]])ADD_SHARED_SYMBOL_FILES([^_[:alnum:]]|$)/ {
+    fail("ADD_SHARED_SYMBOL_FILES")
+}
+
+BEGIN { doc["SOLIB_ADD"] = "\
+Replace SOLIB_ADD with nothing, not needed?"
+    category["SOLIB_ADD"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_ADD([^_[:alnum:]]|$)/ {
+    fail("SOLIB_ADD")
+}
+
+BEGIN { doc["SOLIB_CREATE_INFERIOR_HOOK"] = "\
+Replace SOLIB_CREATE_INFERIOR_HOOK with nothing, not needed?"
+    category["SOLIB_CREATE_INFERIOR_HOOK"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_CREATE_INFERIOR_HOOK([^_[:alnum:]]|$)/ {
+    fail("SOLIB_CREATE_INFERIOR_HOOK")
+}
+
+BEGIN { doc["SOLIB_LOADED_LIBRARY_PATHNAME"] = "\
+Replace SOLIB_LOADED_LIBRARY_PATHNAME with nothing, not needed?"
+    category["SOLIB_LOADED_LIBRARY_PATHNAME"] = ari_regression
+}
+/(^|[^_[:alnum:]])SOLIB_LOADED_LIBRARY_PATHNAME([^_[:alnum:]]|$)/ {
+    fail("SOLIB_LOADED_LIBRARY_PATHNAME")
+}
+
+BEGIN { doc["REGISTER_U_ADDR"] = "\
+Replace REGISTER_U_ADDR with nothing, not needed?"
+    category["REGISTER_U_ADDR"] = ari_regression
+}
+/(^|[^_[:alnum:]])REGISTER_U_ADDR([^_[:alnum:]]|$)/ {
+    fail("REGISTER_U_ADDR")
+}
+
+BEGIN { doc["PROCESS_LINENUMBER_HOOK"] = "\
+Replace PROCESS_LINENUMBER_HOOK with nothing, not needed?"
+    category["PROCESS_LINENUMBER_HOOK"] = ari_regression
+}
+/(^|[^_[:alnum:]])PROCESS_LINENUMBER_HOOK([^_[:alnum:]]|$)/ {
+    fail("PROCESS_LINENUMBER_HOOK")
+}
+
+BEGIN { doc["PC_SOLIB"] = "\
+Replace PC_SOLIB with nothing, not needed?"
+    category["PC_SOLIB"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])PC_SOLIB([^_[:alnum:]]|$)/ {
+    fail("PC_SOLIB")
+}
+
+BEGIN { doc["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = "\
+Replace IN_SOLIB_DYNSYM_RESOLVE_CODE with nothing, not needed?"
+    category["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = ari_regression
+}
+/(^|[^_[:alnum:]])IN_SOLIB_DYNSYM_RESOLVE_CODE([^_[:alnum:]]|$)/ {
+    fail("IN_SOLIB_DYNSYM_RESOLVE_CODE")
+}
+
+BEGIN { doc["GCC_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["GCC2_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC2_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC2_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC2_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC2_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["FUNCTION_EPILOGUE_SIZE"] = "\
+Replace FUNCTION_EPILOGUE_SIZE with nothing, not needed?"
+    category["FUNCTION_EPILOGUE_SIZE"] = ari_regression
+}
+/(^|[^_[:alnum:]])FUNCTION_EPILOGUE_SIZE([^_[:alnum:]]|$)/ {
+    fail("FUNCTION_EPILOGUE_SIZE")
+}
+
+BEGIN { doc["HAVE_VFORK"] = "\
+Do not use HAVE_VFORK, instead include \"gdb_vfork.h\" and call vfork() \
+unconditionally"
+    category["HAVE_VFORK"] = ari_regression
+}
+/(^|[^_[:alnum:]])HAVE_VFORK([^_[:alnum:]]|$)/ {
+    fail("HAVE_VFORK")
+}
+
+BEGIN { doc["bcmp"] = "\
+Do not use bcmp(), ISO C 90 implies memcmp()"
+    category["bcmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcmp[[:space:]]*\(/ {
+    fail("bcmp")
+}
+
+BEGIN { doc["setlinebuf"] = "\
+Do not use setlinebuf(), ISO C 90 implies setvbuf()"
+    category["setlinebuf"] = ari_regression
+}
+/(^|[^_[:alnum:]])setlinebuf[[:space:]]*\(/ {
+    fail("setlinebuf")
+}
+
+BEGIN { doc["bcopy"] = "\
+Do not use bcopy(), ISO C 90 implies memcpy() and memmove()"
+    category["bcopy"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcopy[[:space:]]*\(/ {
+    fail("bcopy")
+}
+
+BEGIN { doc["get_frame_base"] = "\
+Replace get_frame_base with get_frame_id, get_frame_base_address, \
+get_frame_locals_address, or get_frame_args_address."
+    category["get_frame_base"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])get_frame_base([^_[:alnum:]]|$)/ {
+    fail("get_frame_base")
+}
+
+BEGIN { doc["floatformat_to_double"] = "\
+Do not use floatformat_to_double() from libierty, \
+instead use floatformat_to_doublest()"
+    fix("floatformat_to_double", "gdb/doublest.c", 1)
+    category["floatformat_to_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_to_double[[:space:]]*\(/ {
+    fail("floatformat_to_double")
+}
+
+BEGIN { doc["floatformat_from_double"] = "\
+Do not use floatformat_from_double() from libierty, \
+instead use floatformat_from_doublest()"
+    category["floatformat_from_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_from_double[[:space:]]*\(/ {
+    fail("floatformat_from_double")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["LITTLE_ENDIAN"] = "\
+Do not use LITTLE_ENDIAN, instead use BFD_ENDIAN_LITTLE";
+    category["LITTLE_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])LITTLE_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("LITTLE_ENDIAN")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["sec_ptr"] = "\
+Instead of sec_ptr, use struct bfd_section";
+    category["sec_ptr"] = ari_regression
+}
+/(^|[^_[:alnum:]])sec_ptr([^_[:alnum:]]|$)/ {
+    fail("sec_ptr")
+}
+
+BEGIN { doc["frame_unwind_unsigned_register"] = "\
+Replace frame_unwind_unsigned_register with frame_unwind_register_unsigned"
+    category["frame_unwind_unsigned_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])frame_unwind_unsigned_register([^_[:alnum:]]|$)/ {
+    fail("frame_unwind_unsigned_register")
+}
+
+BEGIN { doc["frame_register_read"] = "\
+Replace frame_register_read() with get_frame_register(), or \
+possibly introduce a new method safe_get_frame_register()"
+    category["frame_register_read"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])frame_register_read([^_[:alnum:]]|$)/ {
+    fail("frame_register_read")
+}
+
+BEGIN { doc["read_register"] = "\
+Replace read_register() with regcache_read() et.al."
+    category["read_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_register([^_[:alnum:]]|$)/ {
+    fail("read_register")
+}
+
+BEGIN { doc["write_register"] = "\
+Replace write_register() with regcache_read() et.al."
+    category["write_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])write_register([^_[:alnum:]]|$)/ {
+    fail("write_register")
+}
+
+function report(name) {
+    # Drop any trailing _P.
+    name = gensub(/(_P|_p)$/, "", 1, name)
+    # Convert to lower case
+    name = tolower(name)
+    # Split into category and bug
+    cat = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\1", 1, name)
+    bug = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\2", 1, name)
+    # Report it
+    name = cat " " bug
+    doc[name] = "Do not use " cat " " bug ", see declaration for details"
+    category[name] = cat
+    fail(name)
+}
+
+/(^|[^_[:alnum:]])(DEPRECATED|deprecated|set_gdbarch_deprecated|LEGACY|legacy|set_gdbarch_legacy)_/ {
+    line = $0
+    # print "0 =", $0
+    while (1) {
+	name = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\2", 1, line)
+	line = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\1 \\4", 1, line)
+	# print "name =", name, "line =", line
+	if (name == line) break;
+	report(name)
+    }
+}
+
+# Count the number of times each architecture method is set
+/(^|[^_[:alnum:]])set_gdbarch_[_[:alnum:]]*([^_[:alnum:]]|$)/ {
+    name = gensub(/^.*set_gdbarch_([_[:alnum:]]*).*$/, "\\1", 1, $0)
+    doc["set " name] = "\
+Call to set_gdbarch_" name
+    category["set " name] = ari_gdbarch
+    fail("set " name)
+}
+
+# Count the number of times each tm/xm/nm macro is defined or undefined
+/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+.*$/ \
+&& !/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+_H($|[[:space:]])/ \
+&& FILENAME ~ /(^|\/)config\/(|[^\/]*\/)(tm-|xm-|nm-).*\.h$/ {
+    basename = gensub(/(^|.*\/)([^\/]*)$/, "\\2", 1, FILENAME)
+    type = gensub(/^(tm|xm|nm)-.*\.h$/, "\\1", 1, basename)
+    name = gensub(/^#[[:space:]]*(undef|define)[[:space:]]+([[:alnum:]_]+).*$/, "\\2", 1, $0)
+    if (type == basename) {
+        type = "macro"
+    }
+    doc[type " " name] = "\
+Do not define macros such as " name " in a tm, nm or xm file, \
+in fact do not provide a tm, nm or xm file"
+    category[type " " name] = ari_macro
+    fail(type " " name)
+}
+
+BEGIN { doc["deprecated_registers"] = "\
+Replace deprecated_registers with nothing, they have reached \
+end-of-life"
+    category["deprecated_registers"] = ari_eol
+}
+/(^|[^_[:alnum:]])deprecated_registers([^_[:alnum:]]|$)/ {
+    fail("deprecated_registers")
+}
+
+BEGIN { doc["read_pc"] = "\
+Replace READ_PC() with frame_pc_unwind; \
+at present the inferior function call code still uses this"
+    category["read_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_PC[[:space:]]*\(/ {
+    fail("read_pc")
+}
+
+BEGIN { doc["write_pc"] = "\
+Replace write_pc() with get_frame_base_address or get_frame_id; \
+at present the inferior function call code still uses this when doing \
+a DECR_PC_AFTER_BREAK"
+    category["write_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_WRITE_PC[[:space:]]*\(/ {
+    fail("write_pc")
+}
+
+BEGIN { doc["generic_target_write_pc"] = "\
+Replace generic_target_write_pc with a per-architecture implementation, \
+this relies on PC_REGNUM which is being eliminated"
+    category["generic_target_write_pc"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_target_write_pc([^_[:alnum:]]|$)/ {
+    fail("generic_target_write_pc")
+}
+
+BEGIN { doc["read_sp"] = "\
+Replace read_sp() with frame_sp_unwind"
+    category["read_sp"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_SP[[:space:]]*\(/ {
+    fail("read_sp")
+}
+
+BEGIN { doc["register_cached"] = "\
+Replace register_cached() with nothing, does not have a regcache parameter"
+    category["register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])register_cached[[:space:]]*\(/ {
+    fail("register_cached")
+}
+
+BEGIN { doc["set_register_cached"] = "\
+Replace set_register_cached() with nothing, does not have a regcache parameter"
+    category["set_register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])set_register_cached[[:space:]]*\(/ {
+    fail("set_register_cached")
+}
+
+# Print functions: Use versions that either check for buffer overflow
+# or safely allocate a fresh buffer.
+
+BEGIN { doc["sprintf"] = "\
+Do not use sprintf, instead use xsnprintf or xstrprintf"
+    category["sprintf"] = ari_code
+}
+/(^|[^_[:alnum:]])sprintf[[:space:]]*\(/ {
+    fail("sprintf")
+}
+
+BEGIN { doc["vsprintf"] = "\
+Do not use vsprintf(), instead use xstrvprintf"
+    category["vsprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vsprintf[[:space:]]*\(/ {
+    fail("vsprintf")
+}
+
+BEGIN { doc["asprintf"] = "\
+Do not use asprintf(), instead use xstrprintf()"
+    category["asprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])asprintf[[:space:]]*\(/ {
+    fail("asprintf")
+}
+
+BEGIN { doc["vasprintf"] = "\
+Do not use vasprintf(), instead use xstrvprintf"
+    fix("vasprintf", "gdb/utils.c", 1)
+    category["vasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vasprintf[[:space:]]*\(/ {
+    fail("vasprintf")
+}
+
+BEGIN { doc["xasprintf"] = "\
+Do not use xasprintf(), instead use xstrprintf"
+    fix("xasprintf", "gdb/defs.h", 1)
+    fix("xasprintf", "gdb/utils.c", 1)
+    category["xasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xasprintf[[:space:]]*\(/ {
+    fail("xasprintf")
+}
+
+BEGIN { doc["xvasprintf"] = "\
+Do not use xvasprintf(), instead use xstrvprintf"
+    fix("xvasprintf", "gdb/defs.h", 1)
+    fix("xvasprintf", "gdb/utils.c", 1)
+    category["xvasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xvasprintf[[:space:]]*\(/ {
+    fail("xvasprintf")
+}
+
+# More generic memory operations
+
+BEGIN { doc["bzero"] = "\
+Do not use bzero(), instead use memset()"
+    category["bzero"] = ari_regression
+}
+/(^|[^_[:alnum:]])bzero[[:space:]]*\(/ {
+    fail("bzero")
+}
+
+BEGIN { doc["strdup"] = "\
+Do not use strdup(), instead use xstrdup()";
+    category["strdup"] = ari_regression
+}
+/(^|[^_[:alnum:]])strdup[[:space:]]*\(/ {
+    fail("strdup")
+}
+
+BEGIN { doc["strsave"] = "\
+Do not use strsave(), instead use xstrdup() et.al."
+    category["strsave"] = ari_regression
+}
+/(^|[^_[:alnum:]])strsave[[:space:]]*\(/ {
+    fail("strsave")
+}
+
+# String compare functions
+
+BEGIN { doc["strnicmp"] = "\
+Do not use strnicmp(), instead use strncasecmp()"
+    category["strnicmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])strnicmp[[:space:]]*\(/ {
+    fail("strnicmp")
+}
+
+# Boolean expressions and conditionals
+
+BEGIN { doc["boolean"] = "\
+Do not use `boolean'\'',  use `int'\'' instead"
+    category["boolean"] = ari_regression
+}
+/(^|[^_[:alnum:]])boolean([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("boolean")
+    }
+}
+
+BEGIN { doc["false"] = "\
+Definitely do not use `false'\'' in boolean expressions"
+    category["false"] = ari_regression
+}
+/(^|[^_[:alnum:]])false([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("false")
+    }
+}
+
+BEGIN { doc["true"] = "\
+Do not try to use `true'\'' in boolean expressions"
+    category["true"] = ari_regression
+}
+/(^|[^_[:alnum:]])true([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("true")
+    }
+}
+
+# Typedefs that are either redundant or can be reduced to `struct
+# type *''.
+# Must be placed before if assignment otherwise ARI exceptions
+# are not handled correctly.
+
+BEGIN { doc["d_namelen"] = "\
+Do not use dirent.d_namelen, instead use NAMELEN"
+    category["d_namelen"] = ari_regression
+}
+/(^|[^_[:alnum:]])d_namelen([^_[:alnum:]]|$)/ {
+    fail("d_namelen")
+}
+
+BEGIN { doc["strlen d_name"] = "\
+Do not use strlen dirent.d_name, instead use NAMELEN"
+    category["strlen d_name"] = ari_regression
+}
+/(^|[^_[:alnum:]])strlen[[:space:]]*\(.*[^_[:alnum:]]d_name([^_[:alnum:]]|$)/ {
+    fail("strlen d_name")
+}
+
+BEGIN { doc["var_boolean"] = "\
+Replace var_boolean with add_setshow_boolean_cmd"
+    category["var_boolean"] = ari_regression
+    fix("var_boolean", "gdb/command.h", 1)
+    # fix only uses the last directory level
+    fix("var_boolean", "cli/cli-decode.c", 2)
+}
+/(^|[^_[:alnum:]])var_boolean([^_[:alnum:]]|$)/ {
+    if ($0 !~ /(^|[^_[:alnum:]])case *var_boolean:/) {
+	fail("var_boolean")
+    }
+}
+
+BEGIN { doc["generic_use_struct_convention"] = "\
+Replace generic_use_struct_convention with nothing, \
+EXTRACT_STRUCT_VALUE_ADDRESS is a predicate"
+    category["generic_use_struct_convention"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_use_struct_convention([^_[:alnum:]]|$)/ {
+    fail("generic_use_struct_convention")
+}
+
+BEGIN { doc["if assignment"] = "\
+An IF statement'\''s expression contains an assignment (the GNU coding \
+standard discourages this)"
+    category["if assignment"] = ari_code
+}
+BEGIN { doc["if clause more than 50 lines"] = "\
+An IF statement'\''s expression expands over 50 lines"
+    category["if clause more than 50 lines"] = ari_code
+}
+#
+# Accumulate continuation lines
+FNR == 1 {
+    in_if = 0
+}
+
+/(^|[^_[:alnum:]])if / {
+    in_if = 1;
+    if_brace_level = 0;
+    if_cont_p = 0;
+    if_count = 0;
+    if_brace_end_pos = 0;
+    if_full_line = "";
+}
+(in_if)  {
+    # We want everything up to closing brace of same level
+    if_count++;
+    if (if_count > 50) {
+	print "multiline if: " if_full_line $0
+	fail("if clause more than 50 lines")
+	if_brace_level = 0;
+	if_full_line = "";
+    } else {
+	if (if_count == 1) {
+	    i = index($0,"if ");
+	} else {
+	    i = 1;
+	}
+	for (i=i; i <= length($0); i++) {
+	    char = substr($0,i,1);
+	    if (char == "(") { if_brace_level++; }
+	    if (char == ")") {
+		if_brace_level--;
+		if (!if_brace_level) {
+		    if_brace_end_pos = i;
+		    after_if = substr($0,i+1,length($0));
+		    # Do not parse what is following
+		    break;
+		}
+	    }
+	}
+	if (if_brace_level == 0) {
+	    $0 = substr($0,1,i);
+	    in_if = 0;
+	} else {
+	    if_full_line = if_full_line $0;
+	    if_cont_p = 1;
+	    next;
+	}
+    }
+}
+# if we arrive here, we need to concatenate, but we are at brace level 0
+
+(if_brace_end_pos) {
+    $0 = if_full_line substr($0,1,if_brace_end_pos);
+    if (if_count > 1) {
+	# print "IF: multi line " if_count " found at " FILENAME ":" FNR " \"" $0 "\""
+    }
+    if_cont_p = 0;
+    if_full_line = "";
+}
+/(^|[^_[:alnum:]])if .* = / {
+    # print "fail in if " $0
+    fail("if assignment")
+}
+(if_brace_end_pos) {
+    $0 = $0 after_if;
+    if_brace_end_pos = 0;
+    in_if = 0;
+}
+
+# Printout of all found bug
+
+BEGIN {
+    if (print_doc) {
+	for (bug in doc) {
+	    fail(bug)
+	}
+	exit
+    }
+}' "$@"
+
diff --git a/gdb/contrib/ari/gdb_find.sh b/gdb/contrib/ari/gdb_find.sh
new file mode 100644
index 0000000..9e4b67f
--- /dev/null
+++ b/gdb/contrib/ari/gdb_find.sh
@@ -0,0 +1,41 @@
+#!/bin/sh
+
+# GDB script to create list of files to check using gdb_ari.sh.
+#
+# Copyright (C) 2003-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=C ; export LANG
+LC_ALL=C ; export LC_ALL
+
+
+# A find that prunes files that GDB users shouldn't be interested in.
+# Use sort to order files alphabetically.
+
+find "$@" \
+    -name testsuite -prune -o \
+    -name gdbserver -prune -o \
+    -name gnulib -prune -o \
+    -name osf-share -prune -o \
+    -name '*-stub.c' -prune -o \
+    -name '*-exp.c' -prune -o \
+    -name ada-lex.c -prune -o \
+    -name cp-name-parser.c -prune -o \
+    -type f -name '*.[lyhc]' -print | sort
diff --git a/gdb/contrib/ari/update-web-ari.sh b/gdb/contrib/ari/update-web-ari.sh
new file mode 100644
index 0000000..0ca52f2
--- /dev/null
+++ b/gdb/contrib/ari/update-web-ari.sh
@@ -0,0 +1,930 @@
+#!/bin/sh -x
+
+# GDB script to create GDB ARI web page.
+#
+# Copyright (C) 2001-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# TODO: setjmp.h, setjmp and longjmp.
+
+# Direct stderr into stdout but still hang onto stderr (/dev/fd/3)
+exec 3>&2 2>&1
+ECHO ()
+{
+#   echo "$@" | tee /dev/fd/3 1>&2
+    echo "$@" 1>&2
+    echo "$@" 1>&3
+}
+
+# Really mindless usage
+if test $# -ne 4
+then
+    echo "Usage: $0 <snapshot/sourcedir> <tmpdir> <destdir> <project>" 1>&2
+    exit 1
+fi
+snapshot=$1 ; shift
+tmpdir=$1 ; shift
+wwwdir=$1 ; shift
+project=$1 ; shift
+
+# Try to create destination directory if it doesn't exist yet
+if [ ! -d ${wwwdir} ]
+then
+  mkdir -p ${wwwdir}
+fi
+
+# Fail if destination directory doesn't exist or is not writable
+if [ ! -w ${wwwdir} -o ! -d ${wwwdir} ]
+then
+  echo ERROR: Can not write to directory ${wwwdir} >&2
+  exit 2
+fi
+
+if [ ! -r ${snapshot} ]
+then
+    echo ERROR: Can not read snapshot file 1>&2
+    exit 1
+fi
+
+# FILE formats
+# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+# Where ``*'' is {source,warning,indent,doschk}
+
+unpack_source_p=true
+delete_source_p=true
+
+check_warning_p=false # broken
+check_indent_p=false # too slow, too many fail
+check_source_p=true
+check_doschk_p=true
+check_werror_p=true
+
+update_doc_p=true
+update_web_p=true
+
+if awk --version 2>&1 </dev/null | grep -i gnu > /dev/null
+then
+  AWK=awk
+else
+  AWK=gawk
+fi
+
+# Checking for `doschk' binary (if `check_doschk_p' is active)
+if [ "${check_doschk_p}" == "true" ] && doschk > /dev/null 2>&1
+then
+  have_doschk=true
+else
+  have_doschk=false
+fi
+
+# Set up a few cleanups
+if ${delete_source_p}
+then
+    trap "cd /tmp; rm -rf ${tmpdir}; exit" 0 1 2 15
+fi
+
+
+# If the first parameter is a directory,
+#we just use it as the extracted source
+if [ -d ${snapshot} ]
+then
+  module=${project}
+  srcdir=${snapshot}
+  aridir=${srcdir}/${module}/contrib/ari
+  unpack_source_p=false
+  delete_source_p=false
+  version_in=${srcdir}/${module}/version.in
+else
+  # unpack the tar-ball
+  if ${unpack_source_p}
+  then
+    # Was it previously unpacked?
+    if ${delete_source_p} || test ! -d ${tmpdir}/${module}*
+    then
+	/bin/rm -rf "${tmpdir}"
+	/bin/mkdir -p ${tmpdir}
+	if [ ! -d ${tmpdir} ]
+	then
+	    echo "Problem creating work directory"
+	    exit 1
+	fi
+	cd ${tmpdir} || exit 1
+	echo `date`: Unpacking tar-ball ...
+	case ${snapshot} in
+	    *.tar.bz2 ) bzcat ${snapshot} ;;
+	    *.tar ) cat ${snapshot} ;;
+	    * ) ECHO Bad file ${snapshot} ; exit 1 ;;
+	esac | tar xf -
+    fi
+  fi
+
+  module=`basename ${snapshot}`
+  module=`basename ${module} .bz2`
+  module=`basename ${module} .tar`
+  srcdir=`echo ${tmpdir}/${module}*`
+  aridir=${HOME}/ss
+  version_in=${srcdir}/gdb/version.in
+fi
+
+if [ ! -r ${version_in} ]
+then
+    echo ERROR: missing version file 1>&2
+    exit 1
+fi
+version=`cat ${version_in}`
+
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_warning_p} && test -d "${srcdir}"
+then
+    echo `date`: Parsing compiler warnings 1>&2
+    cat ${root}/ari.compile | $AWK '
+BEGIN {
+    FS=":";
+}
+/^[^:]*:[0-9]*: warning:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  warning[file] += 1;
+}
+/^[^:]*:[0-9]*: error:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  error[file] += 1;
+}
+END {
+  for (file in warning) {
+    print file ":warning:" level[file]
+  }
+  for (file in error) {
+    print file ":error:" level[file]
+  }
+}
+' > ${root}/ari.warning.bug
+fi
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_indent_p} && test -d "${srcdir}"
+then
+    printf "Analizing file indentation:" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh ${project} | while read f
+    do
+	if /bin/sh ${aridir}/gdb_indent.sh < ${f} 2>/dev/null | cmp -s - ${f}
+	then
+	    :
+	else
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    echo "${f}:0: info: indent: Indentation does not match GNU indent output"
+	fi
+    done ) > ${wwwdir}/ari.indent.bug
+    echo ""
+fi
+
+if ${check_source_p} && test -d "${srcdir}"
+then
+    bugf=${wwwdir}/ari.source.bug
+    oldf=${wwwdir}/ari.source.old
+    srcf=${wwwdir}/ari.source.lines
+    oldsrcf=${wwwdir}/ari.source.lines-old
+
+    diff=${wwwdir}/ari.source.diff
+    diffin=${diff}-in
+    newf1=${bugf}1
+    oldf1=${oldf}1
+    oldpruned=${oldf1}-pruned
+    newpruned=${newf1}-pruned
+
+    test -f ${bugf} && cp -f ${bugf} ${oldf}
+    test -f ${srcf} && cp -f ${srcf} ${oldsrcf}
+    rm -f ${srcf}
+    node=`uname -n`
+    echo "`date`: Using source lines ${srcf}" 1>&2
+    echo "`date`: Checking source code" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh "${project}" | \
+	xargs /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --src=${srcf}
+    ) > ${bugf}
+    # Remove things we are not interested in to signal by email
+    # gdbarch changes are not important here
+    # Also convert ` into ' to avoid command substitution in script below
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${oldf} > ${oldf1}
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${bugf} > ${newf1}
+    # Remove line number info so that code inclusion/deletion
+    # has no impact on the result
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${oldf1} > ${oldpruned}
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${newf1} > ${newpruned}
+    # Use diff without option to get normal diff output that
+    # is reparsed after
+    diff ${oldpruned} ${newpruned} > ${diffin}
+    # Only keep new warnings
+    sed -n -e "/^>.*/p" ${diffin} > ${diff}
+    sedscript=${wwwdir}/sedscript
+    script=${wwwdir}/script
+    sed -n -e "s|\(^[0-9,]*\)a\(.*\)|echo \1a\2 \n \
+	sed -n \'\2s:\\\\(.*\\\\):> \\\\1:p\' ${newf1}|p" \
+	-e "s|\(^[0-9,]*\)d\(.*\)|echo \1d\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1}|p" \
+	-e "s|\(^[0-9,]*\)c\(.*\)|echo \1c\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1} \n \
+	sed -n \"\2s:\\\\(.*\\\\):> \\\\1:p\" ${newf1}|p" \
+	${diffin} > ${sedscript}
+    ${SHELL} ${sedscript} > ${wwwdir}/message
+    sed -n \
+	-e "s;\(.*\);echo \\\"\1\\\";p" \
+	-e "s;.*< \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${oldsrcf};p" \
+	-e "s;.*> \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${srcf};p" \
+	${wwwdir}/message > ${script}
+    ${SHELL} ${script} > ${wwwdir}/mail-message
+    if [ "x${branch}" != "x" ]; then
+	email_suffix="`date` in ${branch}"
+    else
+	email_suffix="`date`"
+    fi
+
+fi
+
+if ${check_doschk_p} && test "${have_doschk}" == "true" && test -d "${srcdir}"
+then
+    echo "`date`: Checking for doschk" 1>&2
+    rm -f "${wwwdir}"/ari.doschk.*
+    fnchange_lst="${srcdir}"/gdb/config/djgpp/fnchange.lst
+    fnchange_awk="${wwwdir}"/ari.doschk.awk
+    doschk_in="${wwwdir}"/ari.doschk.in
+    doschk_out="${wwwdir}"/ari.doschk.out
+    doschk_bug="${wwwdir}"/ari.doschk.bug
+    doschk_char="${wwwdir}"/ari.doschk.char
+
+    # Transform fnchange.lst into fnchange.awk.  The program DJTAR
+    # does a textual substitution of each file name using the list.
+    # Generate an awk script that does the equivalent - matches an
+    # exact line and then outputs the replacement.
+
+    sed -e 's;@[^@]*@[/]*\([^ ]*\) @[^@]*@[/]*\([^ ]*\);\$0 == "\1" { print "\2"\; next\; };' \
+	< "${fnchange_lst}" > "${fnchange_awk}"
+    echo '{ print }' >> "${fnchange_awk}"
+
+    # Do the raw analysis - transform the list of files into the DJGPP
+    # equivalents putting it in the .in file
+    ( cd "${srcdir}" && find * \
+	-name '*.info-[0-9]*' -prune \
+	-o -name tcl -prune \
+	-o -name itcl -prune \
+	-o -name tk -prune \
+	-o -name libgui -prune \
+	-o -name tix -prune \
+	-o -name dejagnu -prune \
+	-o -name expect -prune \
+	-o -type f -print ) \
+    | $AWK -f ${fnchange_awk} > ${doschk_in}
+
+    # Start with a clean slate
+    rm -f ${doschk_bug}
+
+    # Check for any invalid characters.
+    grep '[\+\,\;\=\[\]\|\<\>\\\"\:\?\*]' < ${doschk_in} > ${doschk_char}
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    sed < ${doschk_char} >> ${doschk_bug} \
+	-e s'/$/:0: dos: DOSCHK: Invalid DOS character/'
+
+    # Magic to map ari.doschk.out to ari.doschk.bug goes here
+    doschk < ${doschk_in} > ${doschk_out}
+    cat ${doschk_out} | $AWK >> ${doschk_bug} '
+BEGIN {
+    state = 1;
+    invalid_dos = state++; bug[invalid_dos] = "invalid DOS file name";  category[invalid_dos] = "dos";
+    same_dos = state++;    bug[same_dos]    = "DOS 8.3";                category[same_dos] = "dos";
+    same_sysv = state++;   bug[same_sysv]   = "SysV";
+    long_sysv = state++;   bug[long_sysv]   = "long SysV";
+    internal = state++;    bug[internal]    = "internal doschk";        category[internal] = "internal";
+    state = 0;
+}
+/^$/ { state = 0; next; }
+/^The .* not valid DOS/     { state = invalid_dos; next; }
+/^The .* same DOS/          { state = same_dos; next; }
+/^The .* same SysV/         { state = same_sysv; next; }
+/^The .* too long for SysV/ { state = long_sysv; next; }
+/^The .* /                  { state = internal; next; }
+
+NF == 0 { next }
+
+NF == 3 { name = $1 ; file = $3 }
+NF == 1 { file = $1 }
+NF > 3 && $2 == "-" { file = $1 ; name = gensub(/^.* - /, "", 1) }
+
+state == same_dos {
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print  file ":0: " category[state] ": " \
+	name " " bug[state] " " " dup: " \
+	" DOSCHK - the names " name " and " file " resolve to the same" \
+	" file on a " bug[state] \
+	" system.<br>For DOS, this can be fixed by modifying the file" \
+	" fnchange.lst."
+    next
+}
+state == invalid_dos {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  name ": DOSCHK - " name
+    next
+}
+state == internal {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  bug[state] ": DOSCHK - a " \
+	bug[state] " problem"
+}
+'
+fi
+
+
+
+if ${check_werror_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking Makefile.in for non- -Werror rules"
+    rm -f ${wwwdir}/ari.werror.*
+    cat "${srcdir}/${project}/Makefile.in" | $AWK > ${wwwdir}/ari.werror.bug '
+BEGIN {
+    count = 0
+    cont_p = 0
+    full_line = ""
+}
+/^[-_[:alnum:]]+\.o:/ {
+    file = gensub(/.o:.*/, "", 1) ".c"
+}
+
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+/\$\(COMPILE\.pre\)/ {
+    print file " has  line " $0
+    if (($0 !~ /\$\(.*ERROR_CFLAGS\)/) && ($0 !~ /\$\(INTERNAL_CFLAGS\)/)) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print "'"${project}"'/" file ":0: info: Werror: The file is not being compiled with -Werror"
+    }
+}
+'
+fi
+
+
+# From the warnings, generate the doc and indexed bug files
+if ${update_doc_p}
+then
+    cd ${wwwdir}
+    rm -f ari.doc ari.idx ari.doc.bug
+    # Generate an extra file containing all the bugs that the ARI can detect.
+    /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --print-doc >> ari.doc.bug
+    cat ari.*.bug | $AWK > ari.idx '
+BEGIN {
+    FS=": *"
+}
+{
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    file = $1
+    line = $2
+    category = $3
+    bug = $4
+    if (! (bug in cat)) {
+	cat[bug] = category
+	# strip any trailing .... (supplement)
+	doc[bug] = gensub(/ \([^\)]*\)$/, "", 1, $5)
+	count[bug] = 0
+    }
+    if (file != "") {
+	count[bug] += 1
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	print bug ":" file ":" category
+    }
+    # Also accumulate some categories as obsolete
+    if (category == "deprecated") {
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	if (file != "") {
+	    print category ":" file ":" "obsolete"
+	}
+	#count[category]++
+	#doc[category] = "Contains " category " code"
+    }
+}
+END {
+    i = 0;
+    for (bug in count) {
+	# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+	print bug ":" count[bug] ":" cat[bug] ":" doc[bug] >> "ari.doc"
+    }
+}
+'
+fi
+
+
+# print_toc BIAS MIN_COUNT CATEGORIES TITLE
+
+# Print a table of contents containing the bugs CATEGORIES.  If the
+# BUG count >= MIN_COUNT print it in the table-of-contents.  If
+# MIN_COUNT is non -ve, also include a link to the table.Adjust the
+# printed BUG count by BIAS.
+
+all=
+
+print_toc ()
+{
+    bias="$1" ; shift
+    min_count="$1" ; shift
+
+    all=" $all $1 "
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    shift
+
+    title="$@" ; shift
+
+    echo "<p>" >> ${newari}
+    echo "<a name=${title}>" | tr '[A-Z]' '[a-z]' >> ${newari}
+    echo "<h3>${title}</h3>" >> ${newari}
+    cat >> ${newari} # description
+
+    cat >> ${newari} <<EOF
+<p>
+<table>
+<tr><th align=left>BUG</th><th>Total</th><th align=left>Description</th></tr>
+EOF
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    cat ${wwwdir}/ari.doc \
+    | sort -t: +1rn -2 +0d \
+    | $AWK >> ${newari} '
+BEGIN {
+    FS=":"
+    '"$categories"'
+    MIN_COUNT = '${min_count}'
+    BIAS = '${bias}'
+    total = 0
+    nr = 0
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (count < MIN_COUNT) next
+    if (!(category in categories)) next
+    nr += 1
+    total += count
+    printf "<tr>"
+    printf "<th align=left valign=top><a name=\"%s\">", bug
+    printf "%s", gensub(/_/, " ", "g", bug)
+    printf "</a></th>"
+    printf "<td align=right valign=top>"
+    if (count > 0 && MIN_COUNT >= 0) {
+	printf "<a href=\"#,%s\">%d</a></td>", bug, count + BIAS
+    } else {
+	printf "%d", count + BIAS
+    }
+    printf "</td>"
+    printf "<td align=left valign=top>%s</td>", doc
+    printf "</tr>"
+    print ""
+}
+END {
+    print "<tr><th align=right valign=top>" nr "</th><th align=right valign=top>" total "</th><td></td></tr>"
+}
+'
+cat >> ${newari} <<EOF
+</table>
+<p>
+EOF
+}
+
+
+print_table ()
+{
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    # Remember to prune the dir prefix from projects files
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    cat ${wwwdir}/ari.idx | $AWK >> ${newari} '
+function qsort (table,
+		middle, tmp, left, nr_left, right, nr_right, result) {
+    middle = ""
+    for (middle in table) { break; }
+    nr_left = 0;
+    nr_right = 0;
+    for (tmp in table) {
+	if (tolower(tmp) < tolower(middle)) {
+	    nr_left++
+	    left[tmp] = tmp
+	} else if (tolower(tmp) > tolower(middle)) {
+	    nr_right++
+	    right[tmp] = tmp
+	}
+    }
+    #print "qsort " nr_left " " middle " " nr_right > "/dev/stderr"
+    result = ""
+    if (nr_left > 0) {
+	result = qsort(left) SUBSEP
+    }
+    result = result middle
+    if (nr_right > 0) {
+	result = result SUBSEP qsort(right)
+    }
+    return result
+}
+function print_heading (where, bug_i) {
+    print ""
+    print "<tr border=1>"
+    print "<th align=left>File</th>"
+    print "<th align=left><em>Total</em></th>"
+    print "<th></th>"
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th>"
+	# The title names are offset by one.  Otherwize, when the browser
+	# jumps to the name it leaves out half the relevant column.
+	#printf "<a name=\",%s\">&nbsp;</a>", bug
+	printf "<a name=\",%s\">&nbsp;</a>", i2bug[bug_i-1]
+	printf "<a href=\"#%s\">", bug
+	printf "%s", gensub(/_/, " ", "g", bug)
+	printf "</a>\n"
+	printf "</th>\n"
+    }
+    #print "<th></th>"
+    printf "<th><a name=\"%s,\">&nbsp;</a></th>\n", i2bug[bug_i-1]
+    print "<th align=left><em>Total</em></th>"
+    print "<th align=left>File</th>"
+    print "</tr>"
+}
+function print_totals (where, bug_i) {
+    print "<th align=left><em>Totals</em></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&gt;"
+    printf "</th>\n"
+    print "<th></th>";
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th align=right>"
+	printf "<em>"
+	printf "<a href=\"#%s\">%d</a>", bug, bug_total[bug]
+	printf "</em>";
+	printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, where], bug
+	printf "<a href=\"#%s,%s\">v</a>", next_file[bug, where], bug
+	printf "<a name=\"%s,%s\">&nbsp;</a>", where, bug
+	printf "</th>";
+	print ""
+    }
+    print "<th></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&lt;"
+    printf "</th>\n"
+    print "<th align=left><em>Totals</em></th>"
+    print "</tr>"
+}
+BEGIN {
+    FS = ":"
+    '"${categories}"'
+    nr_file = 0;
+    nr_bug = 0;
+}
+{
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    bug = $1
+    file = $2
+    category = $3
+    # Interested in this
+    if (!(category in categories)) next
+    # Totals
+    db[bug, file] += 1
+    bug_total[bug] += 1
+    file_total[file] += 1
+    total += 1
+}
+END {
+
+    # Sort the files and bugs creating indexed lists.
+    nr_bug = split(qsort(bug_total), i2bug, SUBSEP);
+    nr_file = split(qsort(file_total), i2file, SUBSEP);
+
+    # Dummy entries for first/last
+    i2file[0] = 0
+    i2file[-1] = -1
+    i2bug[0] = 0
+    i2bug[-1] = -1
+
+    # Construct a cycle of next/prev links.  The file/bug "0" and "-1"
+    # are used to identify the start/end of the cycle.  Consequently,
+    # prev(0) = -1 (prev of start is the end) and next(-1) = 0 (next
+    # of end is the start).
+
+    # For all the bugs, create a cycle that goes to the prev / next file.
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i]
+	prev = 0
+	prev_file[bug, 0] = -1
+	next_file[bug, -1] = 0
+	for (file_i = 1; file_i <= nr_file; file_i++) {
+	    file = i2file[file_i]
+	    if ((bug, file) in db) {
+		prev_file[bug, file] = prev
+		next_file[bug, prev] = file
+		prev = file
+	    }
+	}
+	prev_file[bug, -1] = prev
+	next_file[bug, prev] = -1
+    }
+
+    # For all the files, create a cycle that goes to the prev / next bug.
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i]
+	prev = 0
+	prev_bug[file, 0] = -1
+	next_bug[file, -1] = 0
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i]
+	    if ((bug, file) in db) {
+		prev_bug[file, bug] = prev
+		next_bug[file, prev] = bug
+		prev = bug
+	    }
+	}
+	prev_bug[file, -1] = prev
+	next_bug[file, prev] = -1
+    }
+
+    print "<table border=1 cellspacing=0>"
+    print "<tr></tr>"
+    print_heading(0);
+    print "<tr></tr>"
+    print_totals(0);
+    print "<tr></tr>"
+
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i];
+	pfile = gensub(/^'${project}'\//, "", 1, file)
+	print ""
+	print "<tr>"
+	print "<th align=left><a name=\"" file ",\">" pfile "</a></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&gt;</a>", file, next_bug[file, 0]
+	printf "</th>\n"
+	print "<th></th>"
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i];
+	    if ((bug, file) in db) {
+		printf "<td align=right>"
+		printf "<a href=\"#%s\">%d</a>", bug, db[bug, file]
+		printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, file], bug
+		printf "<a href=\"#%s,%s\">v</a>", next_file[bug, file], bug
+		printf "<a name=\"%s,%s\">&nbsp;</a>", file, bug
+		printf "</td>"
+		print ""
+	    } else {
+		print "<td>&nbsp;</td>"
+		#print "<td></td>"
+	    }
+	}
+	print "<th></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&lt;</a>", file, prev_bug[file, -1]
+	printf "</th>\n"
+	print "<th align=left>" pfile "</th>"
+	print "</tr>"
+    }
+
+    print "<tr></tr>"
+    print_totals(-1)
+    print "<tr></tr>"
+    print_heading(-1);
+    print "<tr></tr>"
+    print ""
+    print "</table>"
+    print ""
+}
+'
+}
+
+
+# Make the scripts available
+cp ${aridir}/gdb_*.sh ${wwwdir}
+
+# Compute the ARI index - ratio of zero vs non-zero problems.
+indexes=`awk '
+BEGIN {
+    FS=":"
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1; count = $2; category = $3; doc = $4
+
+    if (bug ~ /^legacy_/) legacy++
+    if (bug ~ /^deprecated_/) deprecated++
+
+    if (category !~ /^gdbarch$/) {
+	bugs += count
+    }
+    if (count == 0) {
+	oks++
+    }
+}
+END {
+    #print "tests/ok:", nr / ok
+    #print "bugs/tests:", bugs / nr
+    #print "bugs/ok:", bugs / ok
+    if (oks + legacy + deprecated > 0) {
+        print bugs / ( oks + legacy + deprecated )
+    }
+}
+' ${wwwdir}/ari.doc`
+
+# Merge, generating the ARI tables.
+if ${update_web_p}
+then
+    echo "Create the ARI table" 1>&2
+    oldari=${wwwdir}/old.html
+    ari=${wwwdir}/index.html
+    newari=${wwwdir}/new.html
+    rm -f ${newari} ${newari}.gz
+    cat <<EOF >> ${newari}
+<html>
+<head>
+<title>A.R. Index for GDB version ${version}</title>
+</head>
+<body>
+
+<center><h2>A.R. Index for GDB version ${version}<h2></center>
+
+<!-- body, update above using ../index.sh -->
+
+<!-- Navigation.  This page contains the following anchors.
+"BUG": The definition of the bug.
+"FILE,BUG": The row/column containing FILEs BUG count
+"0,BUG", "-1,BUG": The top/bottom total for BUGs column.
+"FILE,O", "FILE,-1": The left/right total for FILEs row.
+",BUG": The top title for BUGs column.
+"FILE,": The left title for FILEs row.
+-->
+
+<center><h3>${indexes}</h3></center>
+<center><h3>You can not take this seriously!</h3></center>
+
+<center>
+Also available:
+<a href="../gdb/ari/">most recent branch</a>
+|
+<a href="../gdb/current/ari/">current</a>
+|
+<a href="../gdb/download/ari/">last release</a>
+</center>
+
+<center>
+Last updated: `date -u`
+</center>
+EOF
+
+    print_toc 0 1 "internal regression" Critical <<EOF
+Things previously eliminated but returned.  This should always be empty.
+EOF
+
+    print_table "regression code comment obsolete gettext"
+
+    print_toc 0 0 code Code <<EOF
+Coding standard problems, portability problems, readability problems.
+EOF
+
+    print_toc 0 0 comment Comments <<EOF
+Problems concerning comments in source files.
+EOF
+
+    print_toc 0 0 gettext GetText <<EOF
+Gettext related problems.
+EOF
+
+    print_toc 0 -1 dos DOS 8.3 File Names <<EOF
+File names with problems on 8.3 file systems.
+EOF
+
+    print_toc -2 -1 deprecated Deprecated <<EOF
+Mechanisms that have been replaced with something better, simpler,
+cleaner; or are no longer required by core-GDB.  New code should not
+use deprecated mechanisms.  Existing code, when touched, should be
+updated to use non-deprecated mechanisms.  See obsolete and deprecate.
+(The declaration and definition are hopefully excluded from count so
+zero should indicate no remaining uses).
+EOF
+
+    print_toc 0 0 obsolete Obsolete <<EOF
+Mechanisms that have been replaced, but have not yet been marked as
+such (using the deprecated_ prefix).  See deprecate and deprecated.
+EOF
+
+    print_toc 0 -1 deprecate Deprecate <<EOF
+Mechanisms that are a candidate for being made obsolete.  Once core
+GDB no longer depends on these mechanisms and/or there is a
+replacement available, these mechanims can be deprecated (adding the
+deprecated prefix) obsoleted (put into category obsolete) or deleted.
+See obsolete and deprecated.
+EOF
+
+    print_toc -2 -1 legacy Legacy <<EOF
+Methods used to prop up targets using targets that still depend on
+deprecated mechanisms. (The method's declaration and definition are
+hopefully excluded from count).
+EOF
+
+    print_toc -2 -1 gdbarch Gdbarch <<EOF
+Count of calls to the gdbarch set methods.  (Declaration and
+definition hopefully excluded from count).
+EOF
+
+    print_toc 0 -1 macro Macro <<EOF
+Breakdown of macro definitions (and #undef) in configuration files.
+EOF
+
+    print_toc 0 0 regression Fixed <<EOF
+Problems that have been expunged from the source code.
+EOF
+
+    # Check for invalid categories
+    for a in $all; do
+	alls="$alls all[$a] = 1 ;"
+    done
+    if ls ${wwwdir}/ari.*.doc > /dev/null 2>&1
+    then
+        cat ${wwwdir}/ari.*.doc | $AWK >> ${newari} '
+BEGIN {
+    FS = ":"
+    '"$alls"'
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (!(category in all)) {
+	print "<b>" category "</b>: no documentation<br>"
+    }
+}
+'
+    fi
+
+    cat >> ${newari} <<EOF
+<center>
+Input files:
+`( cd ${wwwdir} && ls ari.*.bug ari.idx ari.doc ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<center>
+Scripts:
+`( cd ${wwwdir} && ls *.sh ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<!-- /body, update below using ../index.sh -->
+</body>
+</html>
+EOF
+
+    for i in . .. ../..; do
+	x=${wwwdir}/${i}/index.sh
+	if test -x $x; then
+	    $x ${newari}
+	    break
+	fi
+    done
+
+    gzip -c -v -9 ${newari} > ${newari}.gz
+
+    cp ${ari} ${oldari}
+    cp ${ari}.gz ${oldari}.gz
+    cp ${newari} ${ari}
+    cp ${newari}.gz ${ari}.gz
+
+fi # update_web_p
+
+# ls -l ${wwwdir}
+
+exit 0


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [RFA-v2] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-27  4:06     ` Sergio Durigan Junior
@ 2012-05-27 19:53       ` Pierre Muller
  2012-05-27 22:03         ` Sergio Durigan Junior
  0 siblings, 1 reply; 32+ messages in thread
From: Pierre Muller @ 2012-05-27 19:53 UTC (permalink / raw)
  To: 'Sergio Durigan Junior'; +Cc: 'Jan Kratochvil', gdb-patches



> -----Message d'origine-----
> De : gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] De la part de Sergio Durigan Junior
> Envoyé : dimanche 27 mai 2012 06:06
> À : Pierre Muller
> Cc : 'Jan Kratochvil'; gdb-patches@sourceware.org
> Objet : Re: [RFA-v2] Add scripts to generate ARI web pages to
> gdb/contrib/ari directory
> 
> Hi Pierre,
> 
> On Saturday, May 26 2012, Pierre Muller wrote:
> 
> >> The patch is corrupted by line wrapping, 48 lines and some are not
> trivial
> >> to recover.
> >  Sorry,
> > I hope the attached patch will apply correctly.
> 
> It applies, but it creates a new directoy named src/ari, instead of
> src/gdb/contrib/ari.  Not sure if it was intended, but it doesn't work
> out of the box as I was expecting.  See below.

  This is strange indeed, but it seems to depend on the patch executable
that you use,
I just discovered, while trying to install my patch on a freebsd
machine, that on that system patch uses '-p0' option instead of '-p 0'
on Linux... Maybe you hit a similar problem.


 
> >   Concerning the new create-web-ari-in-src.sh,
> > this is indeed a new script (hence the 2012 copyright only)
> > and it is just a way to be able to generate the ARI index.html web
> > page without any parameters.
> >
> >   It basically only give default parameters
> > to update-web-ari.sh script, which requires four parameters.
> >
> >   I hope this clarifies some of your questions.
> 
> Yes, it does, thank you.
> 
> >   Concerning Sergio's suggestion to separate out the awk script into
> > a separate file, I would like to minimize the changes relative to the
> > existing ss
> > cvs repository files.
> 
> Hm, do you mean that you prefer to postpone this separation, or that you
> don't intend to do it at all?
> 
> If the latter, I still think it's valid to do it because it will improve
> the readability of the code, IMO.  But of course I won't push it if you
> don't intend to do it.

  I am not against the idea of separating it out, but I would like to
be careful to avoid problems if we later decide that the scripts
should be run in a build directory (especially if we call a regenerated
script
in the build dir using the source dir awk script...)
 
> >   About the use of dirname, I think that
> > direname is like basename part of
> > coreutils, and basename is already use several times
> > inside update-web-ari script in ss.
> 
> Yes, I agree, my only concern is that maybe some obscure system won't
> have some of these binaries (like `awk', for example).

  Of course generating the Awk Regression Index web page
without an installed awk binary will be difficult :)
 
> >   I agree that being made public and thus available to
> > many users, it would be nice to chack availability, and add a
workaround,
> > but I have no
> > precise how to do it, probably using a configure or Makefile could help
> here
> > .
> 
> I was thinking more about a check in the shell script itself, no need
> for Makefiles or configure options.

OK.
 
> > Note that gdb directory cvonfigure script seems to contain both
> > dirname and basename...
> 
> Yeah, good point.  Maybe I'm being too paranoid.
> 
> >   I hope you will be able to generate a ARI web page,
> > and give more feedbacks,
> 
> I wasn't able to generate the webpage easily.  I had to do several fixes
> in the scripts.  I am sending a new version of the patch which should
> apply cleanly and create the proper directory under src/gdb/contrib.

  Could you tell us on which system you tried?
 
> I have taken the liberty to fix several errors that were not allowing me
> to generate the web page correctly.  In order to test it, I was using
> the following command:
> 
>    /bin/sh update-web-ari.sh ~/work/src/git/gdb-src /tmp/create-ari
> /tmp/webdir-ari gdb
> 
> i.e.,
> 
>    /bin/sh update-web-ari.sh <SRCDIR> <TMPDIR> <WEBDIR> <PROJECTNAME>
> 
> Note that I did not set the executable bit in any of the scripts below.
> I have chosen to leave them as regular files, just like you did in your
> patch.

  The patch generated by
cvs diff -u -p -N 
doesn't contain any filemode information,
I don't know if this is possible with CVS directly,
but the files do have the executable permission set for user,
because this was suggested earlier in the emails.

  

  To discusss your changes more easily, I
generated a diff between my version and yours,
I will comment on this diffs below.


>>diff -u -p -r ari/create-web-ari-in-src.sh
sergio-ari/create-web-ari-in-src.sh
>>--- ari/create-web-ari-in-src.sh        2012-05-27 12:01:02.000000000
+0200
>>+++ sergio-ari/create-web-ari-in-src.sh 2012-05-27 11:59:26.000000000
+0200
>>@@ -58,7 +58,7 @@ if [ -z "${webdir}" ] ; then
>> fi
>>
>> # Launch update-web-ari.sh in same directory as current script.
>>-${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
>>+/bin/sh ${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir}
gdb
>>
>> if [ -f "${webdir}/index.html" ] ; then
>>   echo "ARI output can be viewed in file \"${webdir}/index.html\""
  This is not needed if we keep the exec permission for users
(or should it be for everyone?).



>>diff -u -p -r ari/gdb_ari.sh sergio-ari/gdb_ari.sh
>>--- ari/gdb_ari.sh      2012-05-27 12:01:02.000000000 +0200
>>+++ sergio-ari/gdb_ari.sh       2012-05-27 11:59:26.000000000 +0200
>>@@ -264,7 +264,7 @@ Do not use `Linux'\'', instead use `Linu
>> && !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
>> && !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
>> && !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
>>-&& !/(^|[^_[:alnum:]])Linux [:digit:]\.[:digit:]+)/ {
>>+&& !/(^|[^_[:alnum:]])Linux [[:digit:]]\.[[:digit:]]+)/ {
>>     fail("GNU/Linux")
>> }
>>

  This is one of the problems I saw, but wnted to fix only after
a first commit to have a minimal difference between ss and gdb/contrib/ari
 
>>diff -u -p -r ari/update-web-ari.sh sergio-ari/update-web-ari.sh
>>--- ari/update-web-ari.sh       2012-05-27 12:01:02.000000000 +0200
>>+++ sergio-ari/update-web-ari.sh        2012-05-27 11:59:26.000000000
+0200
>>@@ -85,6 +85,13 @@ else
>>   AWK=gawk
>> fi
>>
>>+# Checking for `doschk' binary (if `check_doschk_p' is active)
>>+if [ "${check_doschk_p}" == "true" ] && doschk > /dev/null 2>&1
>>+then
>>+  have_doschk=true
>>+else
>>+  have_doschk=false
>>+fi
>>
>> # Set up a few cleanups
>> if ${delete_source_p}

  I am not sure this is correct:
calling doschk without arguments 
caused the program (if available)
to wait for input to analyze instead of analyzing files
given as parameters.
  Was your intent to test if the binary is available?
I think we must do this without calling it.
  Currently, id doschk binary doesn't exist,
line 302 of update-web-ari.sh will create an empty
doschk_out file (with some error probably generated by the shell)
but this is not a problem for me at least on cygwin or Linux

>>@@ -99,7 +106,7 @@ if [ -d ${snapshot} ]
>> then
>>   module=${project}
>>   srcdir=${snapshot}
>>-  aridir=${srcdir}/${module}/ari
>>+  aridir=${srcdir}/${module}/contrib/ari
>>   unpack_source_p=false
>>   delete_source_p=false
>>   version_in=${srcdir}/${module}/version.in
>>

   This is indeed a required change that 
I forgot, sorry about that one.

>>@@ -203,8 +210,8 @@ then
>>     oldpruned=${oldf1}-pruned
>>     newpruned=${newf1}-pruned
>>
>>-    cp -f ${bugf} ${oldf}
>>-    cp -f ${srcf} ${oldsrcf}
>>+    test -f ${bugf} && cp -f ${bugf} ${oldf}
>>+    test -f ${srcf} && cp -f ${srcf} ${oldsrcf}
>>     rm -f ${srcf}
>>     node=`uname -n`
>>     echo "`date`: Using source lines ${srcf}" 1>&2

  OK, I am still a bit weak on some
shell basics, this means
use 'cp' only if test -f ${file}
returns zero, i.e. if the file exists.
   Is it really a problem to have a failing cp call here?

>>@@ -251,10 +258,7 @@ then
>>
>> fi
>>
>>-
>>-
>>-
>>-if ${check_doschk_p} && test -d "${srcdir}"
>>+if ${check_doschk_p} && test "${have_doschk}" == "true" && test -d
"${srcdir}"
>> then
>>     echo "`date`: Checking for doschk" 1>&2
>>     rm -f "${wwwdir}"/ari.doschk.*
  Would 
'if ${check_doschk_p} && ${have_doschk} && test -d ${srcdir}'
be also correct here?
  
>>@@ -744,7 +748,9 @@ END {
>>     #print "tests/ok:", nr / ok
>>     #print "bugs/tests:", bugs / nr
>>     #print "bugs/ok:", bugs / ok
>>-    print bugs / ( oks + legacy + deprecated )
>>+    if (oks + legacy + deprecated > 0) {
>>+        print bugs / ( oks + legacy + deprecated )
>>+    }
>> }
>> ' ${wwwdir}/ari.doc`
>>
  This is nice, but once again, I would prefer to 
add this only after an initial commit.

>>@@ -860,7 +866,9 @@ EOF
>>     for a in $all; do
>>        alls="$alls all[$a] = 1 ;"
>>     done
>>-    cat ari.*.doc | $AWK >> ${newari} '
>>+    if ls ${wwwdir}/ari.*.doc > /dev/null 2>&1
>>+    then
>>+        cat ${wwwdir}/ari.*.doc | $AWK >> ${newari} '
>> BEGIN {
>>     FS = ":"
>>     '"$alls"'
>>@@ -876,6 +884,7 @@ BEGIN {
>>     }
>> }
>> '
>>+    fi
>>
>>     cat >> ${newari} <<EOF
>> <center>
 
  Here also, your change improve the script quality,
and I would be glad to incorporate it later.

  One more problem I encountered on FreeBSD
is that 'awk' and 'gawk' are not completely equivalent and
gawk seems to generate output while
freebsd awk fails...

Pierre 



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v2] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-27 19:53       ` Pierre Muller
@ 2012-05-27 22:03         ` Sergio Durigan Junior
  2012-05-28 18:34           ` [RFA-v3] " Pierre Muller
  0 siblings, 1 reply; 32+ messages in thread
From: Sergio Durigan Junior @ 2012-05-27 22:03 UTC (permalink / raw)
  To: Pierre Muller; +Cc: 'Jan Kratochvil', gdb-patches

On Sunday, May 27 2012, Pierre Muller wrote:

>> >> The patch is corrupted by line wrapping, 48 lines and some are not
>> trivial
>> >> to recover.
>> >  Sorry,
>> > I hope the attached patch will apply correctly.
>> 
>> It applies, but it creates a new directoy named src/ari, instead of
>> src/gdb/contrib/ari.  Not sure if it was intended, but it doesn't work
>> out of the box as I was expecting.  See below.
>
>   This is strange indeed, but it seems to depend on the patch executable
> that you use,
> I just discovered, while trying to install my patch on a freebsd
> machine, that on that system patch uses '-p0' option instead of '-p 0'
> on Linux... Maybe you hit a similar problem.

What I did was just `quilt import /path/to/ari.patch' and `quilt push',
both on top of src/ directory (not src/gdb).

>> >   Concerning Sergio's suggestion to separate out the awk script into
>> > a separate file, I would like to minimize the changes relative to the
>> > existing ss
>> > cvs repository files.
>> 
>> Hm, do you mean that you prefer to postpone this separation, or that you
>> don't intend to do it at all?
>> 
>> If the latter, I still think it's valid to do it because it will improve
>> the readability of the code, IMO.  But of course I won't push it if you
>> don't intend to do it.
>
>   I am not against the idea of separating it out, but I would like to
> be careful to avoid problems if we later decide that the scripts
> should be run in a build directory (especially if we call a regenerated
> script
> in the build dir using the source dir awk script...)

I cannot see why separating the scripts could lead to any problems in
this case you mention.  My idea was just to make separated .awk files
which would be called using `$AWK -f script1.awk...' and so on.

>> > Note that gdb directory cvonfigure script seems to contain both
>> > dirname and basename...
>> 
>> Yeah, good point.  Maybe I'm being too paranoid.
>> 
>> >   I hope you will be able to generate a ARI web page,
>> > and give more feedbacks,
>> 
>> I wasn't able to generate the webpage easily.  I had to do several fixes
>> in the scripts.  I am sending a new version of the patch which should
>> apply cleanly and create the proper directory under src/gdb/contrib.
>
>   Could you tell us on which system you tried?

Fedora 16 x86_64.

>   To discusss your changes more easily, I
> generated a diff between my version and yours,
> I will comment on this diffs below.

Thanks.

>>>diff -u -p -r ari/create-web-ari-in-src.sh
> sergio-ari/create-web-ari-in-src.sh
>>>--- ari/create-web-ari-in-src.sh        2012-05-27 12:01:02.000000000
> +0200
>>>+++ sergio-ari/create-web-ari-in-src.sh 2012-05-27 11:59:26.000000000
> +0200
>>>@@ -58,7 +58,7 @@ if [ -z "${webdir}" ] ; then
>>> fi
>>>
>>> # Launch update-web-ari.sh in same directory as current script.
>>>-${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
>>>+/bin/sh ${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir}
> gdb
>>>
>>> if [ -f "${webdir}/index.html" ] ; then
>>>   echo "ARI output can be viewed in file \"${webdir}/index.html\""
>   This is not needed if we keep the exec permission for users
> (or should it be for everyone?).

Well, I agree it's not needed if `update-web-ari.sh' has the executable
permission set for the user.  However, I think it's not harmful to leave
the `/bin/sh' there, since the file is interpreted anyway.

>>>diff -u -p -r ari/gdb_ari.sh sergio-ari/gdb_ari.sh
>>>--- ari/gdb_ari.sh      2012-05-27 12:01:02.000000000 +0200
>>>+++ sergio-ari/gdb_ari.sh       2012-05-27 11:59:26.000000000 +0200
>>>@@ -264,7 +264,7 @@ Do not use `Linux'\'', instead use `Linu
>>> && !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
>>> && !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
>>> && !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
>>>-&& !/(^|[^_[:alnum:]])Linux [:digit:]\.[:digit:]+)/ {
>>>+&& !/(^|[^_[:alnum:]])Linux [[:digit:]]\.[[:digit:]]+)/ {
>>>     fail("GNU/Linux")
>>> }
>>>
>
>   This is one of the problems I saw, but wnted to fix only after
> a first commit to have a minimal difference between ss and
> gdb/contrib/ari

Oh, OK, as I said, I just fixed those problems because I saw those
warnings and they bothered me.  I can easily send a patch to fix those
minor issues later, when you have the code checked in.

>  
>>>diff -u -p -r ari/update-web-ari.sh sergio-ari/update-web-ari.sh
>>>--- ari/update-web-ari.sh       2012-05-27 12:01:02.000000000 +0200
>>>+++ sergio-ari/update-web-ari.sh        2012-05-27 11:59:26.000000000
> +0200
>>>@@ -85,6 +85,13 @@ else
>>>   AWK=gawk
>>> fi
>>>
>>>+# Checking for `doschk' binary (if `check_doschk_p' is active)
>>>+if [ "${check_doschk_p}" == "true" ] && doschk > /dev/null 2>&1
>>>+then
>>>+  have_doschk=true
>>>+else
>>>+  have_doschk=false
>>>+fi
>>>
>>> # Set up a few cleanups
>>> if ${delete_source_p}
>
>   I am not sure this is correct:
> calling doschk without arguments 
> caused the program (if available)
> to wait for input to analyze instead of analyzing files
> given as parameters.
>   Was your intent to test if the binary is available?
> I think we must do this without calling it.
>   Currently, id doschk binary doesn't exist,
> line 302 of update-web-ari.sh will create an empty
> doschk_out file (with some error probably generated by the shell)
> but this is not a problem for me at least on cygwin or Linux

Sorry, I did not know `doschk' waited for input, I don't have it
installed here and I should have investigated more.  My intention was to
check for its existence, yeah.  I think this check can be replaced by
something like:

    if which doschk > /dev/null 2>&1
    then
        ....
    fi

I was just trying to avoid another annoying warning that I was seeing
while generating the webpage.

>>>@@ -99,7 +106,7 @@ if [ -d ${snapshot} ]
>>> then
>>>   module=${project}
>>>   srcdir=${snapshot}
>>>-  aridir=${srcdir}/${module}/ari
>>>+  aridir=${srcdir}/${module}/contrib/ari
>>>   unpack_source_p=false
>>>   delete_source_p=false
>>>   version_in=${srcdir}/${module}/version.in
>>>
>
>    This is indeed a required change that 
> I forgot, sorry about that one.

AFAIR this is the main reason I couldn't generate the webpage at first
attempt.

>>>@@ -203,8 +210,8 @@ then
>>>     oldpruned=${oldf1}-pruned
>>>     newpruned=${newf1}-pruned
>>>
>>>-    cp -f ${bugf} ${oldf}
>>>-    cp -f ${srcf} ${oldsrcf}
>>>+    test -f ${bugf} && cp -f ${bugf} ${oldf}
>>>+    test -f ${srcf} && cp -f ${srcf} ${oldsrcf}
>>>     rm -f ${srcf}
>>>     node=`uname -n`
>>>     echo "`date`: Using source lines ${srcf}" 1>&2
>
>   OK, I am still a bit weak on some
> shell basics, this means
> use 'cp' only if test -f ${file}
> returns zero, i.e. if the file exists.
>    Is it really a problem to have a failing cp call here?

Yes, the test means exactly that.

It is not a strict problem if `cp' fails, but again, I think useless
warning messages should be avoided for the sake of clarity of the
output.  The more useless warning we generate, the less attention the
user will pay to the really important messages.

>>>@@ -251,10 +258,7 @@ then
>>>
>>> fi
>>>
>>>-
>>>-
>>>-
>>>-if ${check_doschk_p} && test -d "${srcdir}"
>>>+if ${check_doschk_p} && test "${have_doschk}" == "true" && test -d
> "${srcdir}"
>>> then
>>>     echo "`date`: Checking for doschk" 1>&2
>>>     rm -f "${wwwdir}"/ari.doschk.*
>   Would 
> 'if ${check_doschk_p} && ${have_doschk} && test -d ${srcdir}'
> be also correct here?

Not with the code I wrote to check `doschk', because it sets
`${have_doschk}' anyway (to either `true' or `false').  You would have
to change the code to:

    if which doschk > /dev/null 2>&1
    then
        have_doschk=true
    fi

(i.e., remove the `else' command).  If you do that, you will be able to
use the `if' as you proposed, because `${have_doschk}' will only be set
when `doschk' exists.

However, I am not used to see this kind of construction in shell
scripts.  Maybe this can be another point of future improvement in the patch.

>>>@@ -860,7 +866,9 @@ EOF
>>>     for a in $all; do
>>>        alls="$alls all[$a] = 1 ;"
>>>     done
>>>-    cat ari.*.doc | $AWK >> ${newari} '
>>>+    if ls ${wwwdir}/ari.*.doc > /dev/null 2>&1
>>>+    then
>>>+        cat ${wwwdir}/ari.*.doc | $AWK >> ${newari} '
>>> BEGIN {
>>>     FS = ":"
>>>     '"$alls"'
>>>@@ -876,6 +884,7 @@ BEGIN {
>>>     }
>>> }
>>> '
>>>+    fi
>>>
>>>     cat >> ${newari} <<EOF
>>> <center>
>  
>   Here also, your change improve the script quality,
> and I would be glad to incorporate it later.

Great.  Thanks for the comments; as I said earlier, I can easily send a
patch after you have checked in the first version of ARI.

>   One more problem I encountered on FreeBSD
> is that 'awk' and 'gawk' are not completely equivalent and
> gawk seems to generate output while
> freebsd awk fails...

Maybe we'll have to stick to `gawk' then?  This is one of the reasons I
proposed a check for the necessary binaries.

Thanks,

-- 
Sergio


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFA-v3] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-27 22:03         ` Sergio Durigan Junior
@ 2012-05-28 18:34           ` Pierre Muller
  2012-05-28 18:38             ` Pierre Muller
  2012-05-29 13:02             ` Joel Brobecker
  0 siblings, 2 replies; 32+ messages in thread
From: Pierre Muller @ 2012-05-28 18:34 UTC (permalink / raw)
  To: gdb-patches; +Cc: 'Jan Kratochvil', 'Sergio Durigan Junior'

[-- Attachment #1: Type: text/plain, Size: 973 bytes --]


   Thanks to Sergio for his remarks,
in this new version, I only added the changes
which are really required in order to be able to run
the script.

  The patch should apply cleanly using
patch -p 0 -I ari.patch at 
src/gdb directory level and create new scripts in
gdb/contrib./ari directory.

  The two changes I included are:
1) Fix aridir location in update-web-ari.sh script.
2) export AWK in update-web-ari.sh script and use this variable
inside gdb_ari.sh.

As explained in my answer to Sergio, I would like to 
leave other fixes to after initial commit to 
have a better history of changes relative to the
ss directory version on sourceware.org server.


Pierre Muller

PS: could contrib get a separate ChangeLog file?


ChangeLog entry:

2012-05-28  Pierre Muller  <muller@ics.u-strasbg.fr>

	* contrib/ari/create-web-ari-in-src.sh: New file.
	* contrib/ari/gdb_ari.sh: New file.
	* contrib/ari/gdb_find.sh: New file.
	* contrib/ari/update-web-ari.sh: New file.


[-- Attachment #2: ari.patch --]
[-- Type: application/octet-stream, Size: 70740 bytes --]

? contrib/ari/patch
Index: contrib/ari/create-web-ari-in-src.sh
===================================================================
RCS file: contrib/ari/create-web-ari-in-src.sh
diff -N contrib/ari/create-web-ari-in-src.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/create-web-ari-in-src.sh	28 May 2012 18:17:56 -0000
@@ -0,0 +1,68 @@
+#! /bin/sh
+
+# GDB script to create web ARI page directly from within gdb/ari directory.
+#
+# Copyright (C) 2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+set -x
+
+# Determine directory of current script.
+scriptpath=`dirname $0`
+# If "scriptpath" is a relative path, then convert it to absolute.
+if [ "`echo ${scriptpath} | cut -b1`" != '/' ] ; then
+    scriptpath="`pwd`/${scriptpath}"
+fi
+
+# update-web-ari.sh script wants four parameters
+# 1: directory of checkout src or gdb-RELEASE for release sources.
+# 2: a temp directory.
+# 3: a directory for generated web page.
+# 4: The name of the current package, must be gdb here.
+# Here we provide default values for these 4 parameters
+
+# srcdir parameter
+if [ -z "${srcdir}" ] ; then
+  srcdir=${scriptpath}/../../..
+fi
+
+# Determine location of a temporary directory to be used by
+# update-web-ari.sh script.
+if [ -z "${tempdir}" ] ; then
+  if [ ! -z "$TMP" ] ; then
+    tempdir=$TMP/create-ari
+  elif [ ! -z "$TEMP" ] ; then
+    tempdir=$TEMP/create-ari
+  else
+    tempdir=/tmp/create-ari
+  fi
+fi
+
+# Default location of generate index.hmtl web page.
+if [ -z "${webdir}" ] ; then
+  webdir=~/htdocs/www/local/ari
+fi
+
+# Launch update-web-ari.sh in same directory as current script.
+${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
+
+if [ -f "${webdir}/index.html" ] ; then
+  echo "ARI output can be viewed in file \"${webdir}/index.html\""
+else
+  echo "ARI script failed to generate file \"${webdir}/index.html\""
+fi
+
Index: contrib/ari/gdb_ari.sh
===================================================================
RCS file: contrib/ari/gdb_ari.sh
diff -N contrib/ari/gdb_ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/gdb_ari.sh	28 May 2012 18:17:57 -0000
@@ -0,0 +1,1351 @@
+#!/bin/sh
+
+# GDB script to list of problems using awk.
+#
+# Copyright (C) 2002-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=c ; export LANG
+LC_ALL=c ; export LC_ALL
+
+# Permanent checks take the form:
+
+#     Do not use XXXX, ISO C 90 implies YYYY
+#     Do not use XXXX, instead use YYYY''.
+
+# and should never be removed.
+
+# Temporary checks take the form:
+
+#     Replace XXXX with YYYY
+
+# and once they reach zero, can be eliminated.
+
+# FIXME: It should be able to override this on the command line
+error="regression"
+warning="regression"
+ari="regression eol code comment deprecated legacy obsolete gettext"
+all="regression eol code comment deprecated legacy obsolete gettext deprecate internal gdbarch macro"
+print_doc=0
+print_idx=0
+
+usage ()
+{
+    cat <<EOF 1>&2
+Error: $1
+
+Usage:
+    $0 --print-doc --print-idx -Wall -Werror -W<category> <file> ...
+Options:
+  --print-doc    Print a list of all potential problems, then exit.
+  --print-idx    Include the problems IDX (index or key) in every message.
+  --src=file     Write source lines to file.
+  -Werror        Treat all problems as errors.
+  -Wall          Report all problems.
+  -Wari          Report problems that should be fixed in new code.
+  -W<category>   Report problems in the specifed category.  Vaid categories
+                 are: ${all}
+EOF
+    exit 1
+}
+
+
+# Parse the various options
+Woptions=
+srclines=""
+while test $# -gt 0
+do
+    case "$1" in
+    -Wall ) Woptions="${all}" ;;
+    -Wari ) Woptions="${ari}" ;;
+    -Werror ) Werror=1 ;;
+    -W* ) Woptions="${Woptions} `echo x$1 | sed -e 's/x-W//'`" ;;
+    --print-doc ) print_doc=1 ;;
+    --print-idx ) print_idx=1 ;;
+    --src=* ) srclines="`echo $1 | sed -e 's/--src=/srclines=\"/'`\"" ;;
+    -- ) shift ; break ;;
+    - ) break ;;
+    -* ) usage "$1: unknown option" ;;
+    * ) break ;;
+    esac
+    shift
+done
+if test -n "$Woptions" ; then
+    warning="$Woptions"
+    error=
+fi
+
+
+# -Werror implies treating all warnings as errors.
+if test -n "${Werror}" ; then
+    error="${error} ${warning}"
+fi
+
+
+# Validate all errors and warnings.
+for w in ${warning} ${error}
+do
+    case " ${all} " in
+    *" ${w} "* ) ;;
+    * ) usage "Unknown option -W${w}" ;;
+    esac
+done
+
+
+# make certain that there is at least one file.
+if test $# -eq 0 -a ${print_doc} = 0
+then
+    usage "Missing file."
+fi
+
+
+# Convert the errors/warnings into corresponding array entries.
+for a in ${all}
+do
+    aris="${aris} ari_${a} = \"${a}\";"
+done
+for w in ${warning}
+do
+    warnings="${warnings} warning[ari_${w}] = 1;"
+done
+for e in ${error}
+do
+    errors="${errors} error[ari_${e}]  = 1;"
+done
+
+if [ "$AWK" == "" ] ; then
+  AWK=awk
+fi
+
+${AWK} -- '
+BEGIN {
+    # NOTE, for a per-file begin use "FNR == 1".
+    '"${aris}"'
+    '"${errors}"'
+    '"${warnings}"'
+    '"${srclines}"'
+    print_doc =  '$print_doc'
+    print_idx =  '$print_idx'
+    PWD = "'`pwd`'"
+}
+
+# Print the error message for BUG.  Append SUPLEMENT if non-empty.
+function print_bug(file,line,prefix,category,bug,doc,supplement, suffix,idx) {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    if (supplement) {
+	suffix = " (" supplement ")"
+    } else {
+	suffix = ""
+    }
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print file ":" line ": " prefix category ": " idx doc suffix
+    if (srclines != "") {
+	print file ":" line ":" $0 >> srclines
+    }
+}
+
+function fix(bug,file,count) {
+    skip[bug, file] = count
+    skipped[bug, file] = 0
+}
+
+function fail(bug,supplement) {
+    if (doc[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing doc for bug " bug)
+	exit
+    }
+    if (category[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing category for bug " bug)
+	exit
+    }
+
+    if (ARI_OK == bug) {
+	return
+    }
+    # Trim the filename down to just DIRECTORY/FILE so that it can be
+    # robustly used by the FIX code.
+
+    if (FILENAME ~ /^\//) {
+	canonicalname = FILENAME
+    } else {
+        canonicalname = PWD "/" FILENAME
+    }
+    shortname = gensub (/^.*\/([^\\]*\/[^\\]*)$/, "\\1", 1, canonicalname)
+
+    skipped[bug, shortname]++
+    if (skip[bug, shortname] >= skipped[bug, shortname]) {
+	# print FILENAME, FNR, skip[bug, FILENAME], skipped[bug, FILENAME], bug
+	# Do nothing
+    } else if (error[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "", category[bug], bug, doc[bug], supplement)
+    } else if (warning[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "warning: ", category[bug], bug, doc[bug], supplement)
+    }
+}
+
+FNR == 1 {
+    seen[FILENAME] = 1
+    if (match(FILENAME, "\\.[ly]$")) {
+      # FILENAME is a lex or yacc source
+      is_yacc_or_lex = 1
+    }
+    else {
+      is_yacc_or_lex = 0
+    }
+}
+END {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    # Did we do only a partial skip?
+    for (bug_n_file in skip) {
+	split (bug_n_file, a, SUBSEP)
+	bug = a[1]
+	file = a[2]
+	if (seen[file] && (skipped[bug_n_file] < skip[bug_n_file])) {
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    b = file " missing " bug
+	    print_bug(file, 0, "", "internal", file " missing " bug, "Expecting " skip[bug_n_file] " occurances of bug " bug " in file " file ", only found " skipped[bug_n_file])
+	}
+    }
+}
+
+
+# Skip OBSOLETE lines
+/(^|[^_[:alnum:]])OBSOLETE([^_[:alnum:]]|$)/ { next; }
+
+# Skip ARI lines
+
+BEGIN {
+    ARI_OK = ""
+}
+
+/\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = gensub(/^.*\/\* ARI:[[:space:]]*(.*[^[:space:]])[[:space:]]*\*\/.*$/, "\\1", 1, $0)
+    # print "ARI line found \"" $0 "\""
+    # print "ARI_OK \"" ARI_OK "\""
+}
+! /\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = ""
+}
+
+
+# Things in comments
+
+BEGIN { doc["GNU/Linux"] = "\
+Do not use `Linux'\'', instead use `Linux kernel'\'' or `GNU/Linux system'\'';\
+ comments should clearly differentiate between the two (this test assumes that\
+ word `Linux'\'' appears on the same line as the word `GNU'\'' or `kernel'\''\
+ or a kernel version"
+    category["GNU/Linux"] = ari_comment
+}
+/(^|[^_[:alnum:]])Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux [:digit:]\.[:digit:]+)/ {
+    fail("GNU/Linux")
+}
+
+BEGIN { doc["ARGSUSED"] = "\
+Do not use ARGSUSED, unnecessary"
+    category["ARGSUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ARGSUSED([^_[:alnum:]]|$)/ {
+    fail("ARGSUSED")
+}
+
+
+# SNIP - Strip out comments - SNIP
+
+FNR == 1 {
+    comment_p = 0
+}
+comment_p && /\*\// { gsub (/^([^\*]|\*+[^\/\*])*\*+\//, " "); comment_p = 0; }
+comment_p { next; }
+!comment_p { gsub (/\/\*([^\*]|\*+[^\/\*])*\*+\//, " "); }
+!comment_p && /(^|[^"])\/\*/ { gsub (/\/\*.*$/, " "); comment_p = 1; }
+
+
+BEGIN { doc["_ markup"] = "\
+All messages should be marked up with _."
+    category["_ markup"] = ari_gettext
+}
+/^[^"]*[[:space:]](warning|error|error_no_arg|query|perror_with_name)[[:space:]]*\([^_\(a-z]/ {
+    if (! /\("%s"/) {
+	fail("_ markup")
+    }
+}
+
+BEGIN { doc["trailing new line"] = "\
+A message should not have a trailing new line"
+    category["trailing new line"] = ari_gettext
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(_\(".*\\n"\)[\),]/ {
+    fail("trailing new line")
+}
+
+# Include files for which GDB has a custom version.
+
+BEGIN { doc["assert.h"] = "\
+Do not include assert.h, instead include \"gdb_assert.h\"";
+    category["assert.h"] = ari_regression
+    fix("assert.h", "gdb/gdb_assert.h", 0) # it does not use it
+}
+/^#[[:space:]]*include[[:space:]]+.assert\.h./ {
+    fail("assert.h")
+}
+
+BEGIN { doc["dirent.h"] = "\
+Do not include dirent.h, instead include gdb_dirent.h"
+    category["dirent.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.dirent\.h./ {
+    fail("dirent.h")
+}
+
+BEGIN { doc["regex.h"] = "\
+Do not include regex.h, instead include gdb_regex.h"
+    category["regex.h"] = ari_regression
+    fix("regex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.regex\.h./ {
+    fail("regex.h")
+}
+
+BEGIN { doc["xregex.h"] = "\
+Do not include xregex.h, instead include gdb_regex.h"
+    category["xregex.h"] = ari_regression
+    fix("xregex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.xregex\.h./ {
+    fail("xregex.h")
+}
+
+BEGIN { doc["gnu-regex.h"] = "\
+Do not include gnu-regex.h, instead include gdb_regex.h"
+    category["gnu-regex.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.gnu-regex\.h./ {
+    fail("gnu regex.h")
+}
+
+BEGIN { doc["stat.h"] = "\
+Do not include stat.h or sys/stat.h, instead include gdb_stat.h"
+    category["stat.h"] = ari_regression
+    fix("stat.h", "gdb/gdb_stat.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.stat\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/stat\.h./ {
+    fail("stat.h")
+}
+
+BEGIN { doc["wait.h"] = "\
+Do not include wait.h or sys/wait.h, instead include gdb_wait.h"
+    fix("wait.h", "gdb/gdb_wait.h", 2);
+    category["wait.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.wait\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/wait\.h./ {
+    fail("wait.h")
+}
+
+BEGIN { doc["vfork.h"] = "\
+Do not include vfork.h, instead include gdb_vfork.h"
+    fix("vfork.h", "gdb/gdb_vfork.h", 1);
+    category["vfork.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.vfork\.h./ {
+    fail("vfork.h")
+}
+
+BEGIN { doc["error not internal-warning"] = "\
+Do not use error(\"internal-warning\"), instead use internal_warning"
+    category["error not internal-warning"] = ari_regression
+}
+/error.*\"[Ii]nternal.warning/ {
+    fail("error not internal-warning")
+}
+
+BEGIN { doc["%p"] = "\
+Do not use printf(\"%p\"), instead use printf(\"%s\",paddr()) to dump a \
+target address, or host_address_to_string() for a host address"
+    category["%p"] = ari_code
+}
+/%p/ && !/%prec/ {
+    fail("%p")
+}
+
+BEGIN { doc["%ll"] = "\
+Do not use printf(\"%ll\"), instead use printf(\"%s\",phex()) to dump a \
+`long long'\'' value"
+    category["%ll"] = ari_code
+}
+# Allow %ll in scanf
+/%[0-9]*ll/ && !/scanf \(.*%[0-9]*ll/ {
+    fail("%ll")
+}
+
+
+# SNIP - Strip out strings - SNIP
+
+# Test on top.c, scm-valprint.c, remote-rdi.c, ada-lang.c
+FNR == 1 {
+    string_p = 0
+    trace_string = 0
+}
+# Strip escaped characters.
+{ gsub(/\\./, "."); }
+# Strip quoted quotes.
+{ gsub(/'\''.'\''/, "'\''.'\''"); }
+# End of multi-line string
+string_p && /\"/ {
+    if (trace_string) print "EOS:" FNR, $0;
+    gsub (/^[^\"]*\"/, "'\''");
+    string_p = 0;
+}
+# Middle of multi-line string, discard line.
+string_p {
+    if (trace_string) print "MOS:" FNR, $0;
+    $0 = ""
+}
+# Strip complete strings from the middle of the line
+!string_p && /\"[^\"]*\"/ {
+    if (trace_string) print "COS:" FNR, $0;
+    gsub (/\"[^\"]*\"/, "'\''");
+}
+# Start of multi-line string
+BEGIN { doc["multi-line string"] = "\
+Multi-line string must have the newline escaped"
+    category["multi-line string"] = ari_regression
+}
+!string_p && /\"/ {
+    if (trace_string) print "SOS:" FNR, $0;
+    if (/[^\\]$/) {
+	fail("multi-line string")
+    }
+    gsub (/\"[^\"]*$/, "'\''");
+    string_p = 1;
+}
+# { print }
+
+# Multi-line string
+string_p &&
+
+# Accumulate continuation lines
+FNR == 1 {
+    cont_p = 0
+}
+!cont_p { full_line = ""; }
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+
+# GDB uses ISO C 90.  Check for any non pure ISO C 90 code
+
+BEGIN { doc["PARAMS"] = "\
+Do not use PARAMS(), ISO C 90 implies prototypes"
+    category["PARAMS"] = ari_regression
+}
+/(^|[^_[:alnum:]])PARAMS([^_[:alnum:]]|$)/ {
+    fail("PARAMS")
+}
+
+BEGIN { doc["__func__"] = "\
+Do not use __func__, ISO C 90 does not support this macro"
+    category["__func__"] = ari_regression
+    fix("__func__", "gdb/gdb_assert.h", 1)
+}
+/(^|[^_[:alnum:]])__func__([^_[:alnum:]]|$)/ {
+    fail("__func__")
+}
+
+BEGIN { doc["__FUNCTION__"] = "\
+Do not use __FUNCTION__, ISO C 90 does not support this macro"
+    category["__FUNCTION__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__FUNCTION__([^_[:alnum:]]|$)/ {
+    fail("__FUNCTION__")
+}
+
+BEGIN { doc["__CYGWIN32__"] = "\
+Do not use __CYGWIN32__, instead use __CYGWIN__ or, better, an explicit \
+autoconf tests"
+    category["__CYGWIN32__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__CYGWIN32__([^_[:alnum:]]|$)/ {
+    fail("__CYGWIN32__")
+}
+
+BEGIN { doc["PTR"] = "\
+Do not use PTR, ISO C 90 implies `void *'\''"
+    category["PTR"] = ari_regression
+    #fix("PTR", "gdb/utils.c", 6)
+}
+/(^|[^_[:alnum:]])PTR([^_[:alnum:]]|$)/ {
+    fail("PTR")
+}
+
+BEGIN { doc["UCASE function"] = "\
+Function name is uppercase."
+    category["UCASE function"] = ari_code
+    possible_UCASE = 0
+    UCASE_full_line = ""
+}
+(possible_UCASE) {
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    # Closing brace found?
+    else if (UCASE_full_line ~ \
+	/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((UCASE_full_line ~ \
+	    /^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = UCASE_full_line;
+	    fail("UCASE function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_UCASE = 0
+	UCASE_full_line = ""
+    } else {
+	UCASE_full_line = UCASE_full_line $0;
+    }
+}
+/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_UCASE = 1
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    possible_FNR = FNR
+    UCASE_full_line = $0
+}
+
+
+BEGIN { doc["editCase function"] = "\
+Function name starts lower case but has uppercased letters."
+    category["editCase function"] = ari_code
+    possible_editCase = 0
+    editCase_full_line = ""
+}
+(possible_editCase) {
+    if (ARI_OK == "ediCase function") {
+	possible_editCase = 0
+    }
+    # Closing brace found?
+    else if (editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = editCase_full_line;
+	    fail("editCase function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_editCase = 0
+	editCase_full_line = ""
+    } else {
+	editCase_full_line = editCase_full_line $0;
+    }
+}
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_editCase = 1
+    if (ARI_OK == "editCase function") {
+        possible_editCase = 0
+    }
+    possible_FNR = FNR
+    editCase_full_line = $0
+}
+
+# Only function implementation should be on first column
+BEGIN { doc["function call in first column"] = "\
+Function name in first column should be restricted to function implementation"
+    category["function call in first column"] = ari_code
+}
+/^[a-z][a-z0-9_]*[[:space:]]*\((|[^*][^()]*)\)[[:space:]]*[^ \t]+/ {
+    fail("function call in first column")
+}
+
+
+# Functions without any parameter should have (void)
+# after their name not simply ().
+BEGIN { doc["no parameter function"] = "\
+Function having no parameter should be declared with funcname (void)."
+    category["no parameter function"] = ari_code
+}
+/^[a-zA-Z][a-z0-9A-Z_]*[[:space:]]*\(\)/ {
+    fail("no parameter function")
+}
+
+BEGIN { doc["hash"] = "\
+Do not use ` #...'\'', instead use `#...'\''(some compilers only correctly \
+parse a C preprocessor directive when `#'\'' is the first character on \
+the line)"
+    category["hash"] = ari_regression
+}
+/^[[:space:]]+#/ {
+    fail("hash")
+}
+
+BEGIN { doc["OP eol"] = "\
+Do not use &&, or || at the end of a line"
+    category["OP eol"] = ari_code
+}
+/(\|\||\&\&|==|!=)[[:space:]]*$/ {
+    fail("OP eol")
+}
+
+BEGIN { doc["strerror"] = "\
+Do not use strerror(), instead use safe_strerror()"
+    category["strerror"] = ari_regression
+    fix("strerror", "gdb/gdb_string.h", 1)
+    fix("strerror", "gdb/mingw-hdep.c", 1)
+    fix("strerror", "gdb/posix-hdep.c", 1)
+}
+/(^|[^_[:alnum:]])strerror[[:space:]]*\(/ {
+    fail("strerror")
+}
+
+BEGIN { doc["long long"] = "\
+Do not use `long long'\'', instead use LONGEST"
+    category["long long"] = ari_code
+    # defs.h needs two such patterns for LONGEST and ULONGEST definitions
+    fix("long long", "gdb/defs.h", 2)
+}
+/(^|[^_[:alnum:]])long[[:space:]]+long([^_[:alnum:]]|$)/ {
+    fail("long long")
+}
+
+BEGIN { doc["ATTRIBUTE_UNUSED"] = "\
+Do not use ATTRIBUTE_UNUSED, do not bother (GDB is compiled with -Werror and, \
+consequently, is not able to tolerate false warnings.  Since -Wunused-param \
+produces such warnings, neither that warning flag nor ATTRIBUTE_UNUSED \
+are used by GDB"
+    category["ATTRIBUTE_UNUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTRIBUTE_UNUSED([^_[:alnum:]]|$)/ {
+    fail("ATTRIBUTE_UNUSED")
+}
+
+BEGIN { doc["ATTR_FORMAT"] = "\
+Do not use ATTR_FORMAT, use ATTRIBUTE_PRINTF instead"
+    category["ATTR_FORMAT"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_FORMAT([^_[:alnum:]]|$)/ {
+    fail("ATTR_FORMAT")
+}
+
+BEGIN { doc["ATTR_NORETURN"] = "\
+Do not use ATTR_NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["ATTR_NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_NORETURN([^_[:alnum:]]|$)/ {
+    fail("ATTR_NORETURN")
+}
+
+BEGIN { doc["NORETURN"] = "\
+Do not use NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])NORETURN([^_[:alnum:]]|$)/ {
+    fail("NORETURN")
+}
+
+
+# General problems
+
+BEGIN { doc["multiple messages"] = "\
+Do not use multiple calls to warning or error, instead use a single call"
+    category["multiple messages"] = ari_gettext
+}
+FNR == 1 {
+    warning_fnr = -1
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(/ {
+    if (FNR == warning_fnr + 1) {
+	fail("multiple messages")
+    } else {
+	warning_fnr = FNR
+    }
+}
+
+# Commented out, but left inside sources, just in case.
+# BEGIN { doc["inline"] = "\
+# Do not use the inline attribute; \
+# since the compiler generally ignores this, better algorithm selection \
+# is needed to improved performance"
+#    category["inline"] = ari_code
+# }
+# /(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
+#     fail("inline")
+# }
+
+# This test is obsolete as this type
+# has been deprecated and finally suppressed from GDB sources
+#BEGIN { doc["obj_private"] = "\
+#Replace obj_private with objfile_data"
+#    category["obj_private"] = ari_obsolete
+#}
+#/(^|[^_[:alnum:]])obj_private([^_[:alnum:]]|$)/ {
+#    fail("obj_private")
+#}
+
+BEGIN { doc["abort"] = "\
+Do not use abort, instead use internal_error; GDB should never abort"
+    category["abort"] = ari_regression
+    fix("abort", "gdb/utils.c", 3)
+}
+/(^|[^_[:alnum:]])abort[[:space:]]*\(/ {
+    fail("abort")
+}
+
+BEGIN { doc["basename"] = "\
+Do not use basename, instead use lbasename"
+    category["basename"] = ari_regression
+}
+/(^|[^_[:alnum:]])basename[[:space:]]*\(/ {
+    fail("basename")
+}
+
+BEGIN { doc["assert"] = "\
+Do not use assert, instead use gdb_assert or internal_error; assert \
+calls abort and GDB should never call abort"
+    category["assert"] = ari_regression
+}
+/(^|[^_[:alnum:]])assert[[:space:]]*\(/ {
+    fail("assert")
+}
+
+BEGIN { doc["TARGET_HAS_HARDWARE_WATCHPOINTS"] = "\
+Replace TARGET_HAS_HARDWARE_WATCHPOINTS with nothing, not needed"
+    category["TARGET_HAS_HARDWARE_WATCHPOINTS"] = ari_regression
+}
+/(^|[^_[:alnum:]])TARGET_HAS_HARDWARE_WATCHPOINTS([^_[:alnum:]]|$)/ {
+    fail("TARGET_HAS_HARDWARE_WATCHPOINTS")
+}
+
+BEGIN { doc["ADD_SHARED_SYMBOL_FILES"] = "\
+Replace ADD_SHARED_SYMBOL_FILES with nothing, not needed?"
+    category["ADD_SHARED_SYMBOL_FILES"] = ari_regression
+}
+/(^|[^_[:alnum:]])ADD_SHARED_SYMBOL_FILES([^_[:alnum:]]|$)/ {
+    fail("ADD_SHARED_SYMBOL_FILES")
+}
+
+BEGIN { doc["SOLIB_ADD"] = "\
+Replace SOLIB_ADD with nothing, not needed?"
+    category["SOLIB_ADD"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_ADD([^_[:alnum:]]|$)/ {
+    fail("SOLIB_ADD")
+}
+
+BEGIN { doc["SOLIB_CREATE_INFERIOR_HOOK"] = "\
+Replace SOLIB_CREATE_INFERIOR_HOOK with nothing, not needed?"
+    category["SOLIB_CREATE_INFERIOR_HOOK"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_CREATE_INFERIOR_HOOK([^_[:alnum:]]|$)/ {
+    fail("SOLIB_CREATE_INFERIOR_HOOK")
+}
+
+BEGIN { doc["SOLIB_LOADED_LIBRARY_PATHNAME"] = "\
+Replace SOLIB_LOADED_LIBRARY_PATHNAME with nothing, not needed?"
+    category["SOLIB_LOADED_LIBRARY_PATHNAME"] = ari_regression
+}
+/(^|[^_[:alnum:]])SOLIB_LOADED_LIBRARY_PATHNAME([^_[:alnum:]]|$)/ {
+    fail("SOLIB_LOADED_LIBRARY_PATHNAME")
+}
+
+BEGIN { doc["REGISTER_U_ADDR"] = "\
+Replace REGISTER_U_ADDR with nothing, not needed?"
+    category["REGISTER_U_ADDR"] = ari_regression
+}
+/(^|[^_[:alnum:]])REGISTER_U_ADDR([^_[:alnum:]]|$)/ {
+    fail("REGISTER_U_ADDR")
+}
+
+BEGIN { doc["PROCESS_LINENUMBER_HOOK"] = "\
+Replace PROCESS_LINENUMBER_HOOK with nothing, not needed?"
+    category["PROCESS_LINENUMBER_HOOK"] = ari_regression
+}
+/(^|[^_[:alnum:]])PROCESS_LINENUMBER_HOOK([^_[:alnum:]]|$)/ {
+    fail("PROCESS_LINENUMBER_HOOK")
+}
+
+BEGIN { doc["PC_SOLIB"] = "\
+Replace PC_SOLIB with nothing, not needed?"
+    category["PC_SOLIB"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])PC_SOLIB([^_[:alnum:]]|$)/ {
+    fail("PC_SOLIB")
+}
+
+BEGIN { doc["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = "\
+Replace IN_SOLIB_DYNSYM_RESOLVE_CODE with nothing, not needed?"
+    category["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = ari_regression
+}
+/(^|[^_[:alnum:]])IN_SOLIB_DYNSYM_RESOLVE_CODE([^_[:alnum:]]|$)/ {
+    fail("IN_SOLIB_DYNSYM_RESOLVE_CODE")
+}
+
+BEGIN { doc["GCC_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["GCC2_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC2_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC2_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC2_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC2_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["FUNCTION_EPILOGUE_SIZE"] = "\
+Replace FUNCTION_EPILOGUE_SIZE with nothing, not needed?"
+    category["FUNCTION_EPILOGUE_SIZE"] = ari_regression
+}
+/(^|[^_[:alnum:]])FUNCTION_EPILOGUE_SIZE([^_[:alnum:]]|$)/ {
+    fail("FUNCTION_EPILOGUE_SIZE")
+}
+
+BEGIN { doc["HAVE_VFORK"] = "\
+Do not use HAVE_VFORK, instead include \"gdb_vfork.h\" and call vfork() \
+unconditionally"
+    category["HAVE_VFORK"] = ari_regression
+}
+/(^|[^_[:alnum:]])HAVE_VFORK([^_[:alnum:]]|$)/ {
+    fail("HAVE_VFORK")
+}
+
+BEGIN { doc["bcmp"] = "\
+Do not use bcmp(), ISO C 90 implies memcmp()"
+    category["bcmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcmp[[:space:]]*\(/ {
+    fail("bcmp")
+}
+
+BEGIN { doc["setlinebuf"] = "\
+Do not use setlinebuf(), ISO C 90 implies setvbuf()"
+    category["setlinebuf"] = ari_regression
+}
+/(^|[^_[:alnum:]])setlinebuf[[:space:]]*\(/ {
+    fail("setlinebuf")
+}
+
+BEGIN { doc["bcopy"] = "\
+Do not use bcopy(), ISO C 90 implies memcpy() and memmove()"
+    category["bcopy"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcopy[[:space:]]*\(/ {
+    fail("bcopy")
+}
+
+BEGIN { doc["get_frame_base"] = "\
+Replace get_frame_base with get_frame_id, get_frame_base_address, \
+get_frame_locals_address, or get_frame_args_address."
+    category["get_frame_base"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])get_frame_base([^_[:alnum:]]|$)/ {
+    fail("get_frame_base")
+}
+
+BEGIN { doc["floatformat_to_double"] = "\
+Do not use floatformat_to_double() from libierty, \
+instead use floatformat_to_doublest()"
+    fix("floatformat_to_double", "gdb/doublest.c", 1)
+    category["floatformat_to_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_to_double[[:space:]]*\(/ {
+    fail("floatformat_to_double")
+}
+
+BEGIN { doc["floatformat_from_double"] = "\
+Do not use floatformat_from_double() from libierty, \
+instead use floatformat_from_doublest()"
+    category["floatformat_from_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_from_double[[:space:]]*\(/ {
+    fail("floatformat_from_double")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["LITTLE_ENDIAN"] = "\
+Do not use LITTLE_ENDIAN, instead use BFD_ENDIAN_LITTLE";
+    category["LITTLE_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])LITTLE_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("LITTLE_ENDIAN")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["sec_ptr"] = "\
+Instead of sec_ptr, use struct bfd_section";
+    category["sec_ptr"] = ari_regression
+}
+/(^|[^_[:alnum:]])sec_ptr([^_[:alnum:]]|$)/ {
+    fail("sec_ptr")
+}
+
+BEGIN { doc["frame_unwind_unsigned_register"] = "\
+Replace frame_unwind_unsigned_register with frame_unwind_register_unsigned"
+    category["frame_unwind_unsigned_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])frame_unwind_unsigned_register([^_[:alnum:]]|$)/ {
+    fail("frame_unwind_unsigned_register")
+}
+
+BEGIN { doc["frame_register_read"] = "\
+Replace frame_register_read() with get_frame_register(), or \
+possibly introduce a new method safe_get_frame_register()"
+    category["frame_register_read"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])frame_register_read([^_[:alnum:]]|$)/ {
+    fail("frame_register_read")
+}
+
+BEGIN { doc["read_register"] = "\
+Replace read_register() with regcache_read() et.al."
+    category["read_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_register([^_[:alnum:]]|$)/ {
+    fail("read_register")
+}
+
+BEGIN { doc["write_register"] = "\
+Replace write_register() with regcache_read() et.al."
+    category["write_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])write_register([^_[:alnum:]]|$)/ {
+    fail("write_register")
+}
+
+function report(name) {
+    # Drop any trailing _P.
+    name = gensub(/(_P|_p)$/, "", 1, name)
+    # Convert to lower case
+    name = tolower(name)
+    # Split into category and bug
+    cat = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\1", 1, name)
+    bug = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\2", 1, name)
+    # Report it
+    name = cat " " bug
+    doc[name] = "Do not use " cat " " bug ", see declaration for details"
+    category[name] = cat
+    fail(name)
+}
+
+/(^|[^_[:alnum:]])(DEPRECATED|deprecated|set_gdbarch_deprecated|LEGACY|legacy|set_gdbarch_legacy)_/ {
+    line = $0
+    # print "0 =", $0
+    while (1) {
+	name = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\2", 1, line)
+	line = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\1 \\4", 1, line)
+	# print "name =", name, "line =", line
+	if (name == line) break;
+	report(name)
+    }
+}
+
+# Count the number of times each architecture method is set
+/(^|[^_[:alnum:]])set_gdbarch_[_[:alnum:]]*([^_[:alnum:]]|$)/ {
+    name = gensub(/^.*set_gdbarch_([_[:alnum:]]*).*$/, "\\1", 1, $0)
+    doc["set " name] = "\
+Call to set_gdbarch_" name
+    category["set " name] = ari_gdbarch
+    fail("set " name)
+}
+
+# Count the number of times each tm/xm/nm macro is defined or undefined
+/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+.*$/ \
+&& !/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+_H($|[[:space:]])/ \
+&& FILENAME ~ /(^|\/)config\/(|[^\/]*\/)(tm-|xm-|nm-).*\.h$/ {
+    basename = gensub(/(^|.*\/)([^\/]*)$/, "\\2", 1, FILENAME)
+    type = gensub(/^(tm|xm|nm)-.*\.h$/, "\\1", 1, basename)
+    name = gensub(/^#[[:space:]]*(undef|define)[[:space:]]+([[:alnum:]_]+).*$/, "\\2", 1, $0)
+    if (type == basename) {
+        type = "macro"
+    }
+    doc[type " " name] = "\
+Do not define macros such as " name " in a tm, nm or xm file, \
+in fact do not provide a tm, nm or xm file"
+    category[type " " name] = ari_macro
+    fail(type " " name)
+}
+
+BEGIN { doc["deprecated_registers"] = "\
+Replace deprecated_registers with nothing, they have reached \
+end-of-life"
+    category["deprecated_registers"] = ari_eol
+}
+/(^|[^_[:alnum:]])deprecated_registers([^_[:alnum:]]|$)/ {
+    fail("deprecated_registers")
+}
+
+BEGIN { doc["read_pc"] = "\
+Replace READ_PC() with frame_pc_unwind; \
+at present the inferior function call code still uses this"
+    category["read_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_PC[[:space:]]*\(/ {
+    fail("read_pc")
+}
+
+BEGIN { doc["write_pc"] = "\
+Replace write_pc() with get_frame_base_address or get_frame_id; \
+at present the inferior function call code still uses this when doing \
+a DECR_PC_AFTER_BREAK"
+    category["write_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_WRITE_PC[[:space:]]*\(/ {
+    fail("write_pc")
+}
+
+BEGIN { doc["generic_target_write_pc"] = "\
+Replace generic_target_write_pc with a per-architecture implementation, \
+this relies on PC_REGNUM which is being eliminated"
+    category["generic_target_write_pc"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_target_write_pc([^_[:alnum:]]|$)/ {
+    fail("generic_target_write_pc")
+}
+
+BEGIN { doc["read_sp"] = "\
+Replace read_sp() with frame_sp_unwind"
+    category["read_sp"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_SP[[:space:]]*\(/ {
+    fail("read_sp")
+}
+
+BEGIN { doc["register_cached"] = "\
+Replace register_cached() with nothing, does not have a regcache parameter"
+    category["register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])register_cached[[:space:]]*\(/ {
+    fail("register_cached")
+}
+
+BEGIN { doc["set_register_cached"] = "\
+Replace set_register_cached() with nothing, does not have a regcache parameter"
+    category["set_register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])set_register_cached[[:space:]]*\(/ {
+    fail("set_register_cached")
+}
+
+# Print functions: Use versions that either check for buffer overflow
+# or safely allocate a fresh buffer.
+
+BEGIN { doc["sprintf"] = "\
+Do not use sprintf, instead use xsnprintf or xstrprintf"
+    category["sprintf"] = ari_code
+}
+/(^|[^_[:alnum:]])sprintf[[:space:]]*\(/ {
+    fail("sprintf")
+}
+
+BEGIN { doc["vsprintf"] = "\
+Do not use vsprintf(), instead use xstrvprintf"
+    category["vsprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vsprintf[[:space:]]*\(/ {
+    fail("vsprintf")
+}
+
+BEGIN { doc["asprintf"] = "\
+Do not use asprintf(), instead use xstrprintf()"
+    category["asprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])asprintf[[:space:]]*\(/ {
+    fail("asprintf")
+}
+
+BEGIN { doc["vasprintf"] = "\
+Do not use vasprintf(), instead use xstrvprintf"
+    fix("vasprintf", "gdb/utils.c", 1)
+    category["vasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vasprintf[[:space:]]*\(/ {
+    fail("vasprintf")
+}
+
+BEGIN { doc["xasprintf"] = "\
+Do not use xasprintf(), instead use xstrprintf"
+    fix("xasprintf", "gdb/defs.h", 1)
+    fix("xasprintf", "gdb/utils.c", 1)
+    category["xasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xasprintf[[:space:]]*\(/ {
+    fail("xasprintf")
+}
+
+BEGIN { doc["xvasprintf"] = "\
+Do not use xvasprintf(), instead use xstrvprintf"
+    fix("xvasprintf", "gdb/defs.h", 1)
+    fix("xvasprintf", "gdb/utils.c", 1)
+    category["xvasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xvasprintf[[:space:]]*\(/ {
+    fail("xvasprintf")
+}
+
+# More generic memory operations
+
+BEGIN { doc["bzero"] = "\
+Do not use bzero(), instead use memset()"
+    category["bzero"] = ari_regression
+}
+/(^|[^_[:alnum:]])bzero[[:space:]]*\(/ {
+    fail("bzero")
+}
+
+BEGIN { doc["strdup"] = "\
+Do not use strdup(), instead use xstrdup()";
+    category["strdup"] = ari_regression
+}
+/(^|[^_[:alnum:]])strdup[[:space:]]*\(/ {
+    fail("strdup")
+}
+
+BEGIN { doc["strsave"] = "\
+Do not use strsave(), instead use xstrdup() et.al."
+    category["strsave"] = ari_regression
+}
+/(^|[^_[:alnum:]])strsave[[:space:]]*\(/ {
+    fail("strsave")
+}
+
+# String compare functions
+
+BEGIN { doc["strnicmp"] = "\
+Do not use strnicmp(), instead use strncasecmp()"
+    category["strnicmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])strnicmp[[:space:]]*\(/ {
+    fail("strnicmp")
+}
+
+# Boolean expressions and conditionals
+
+BEGIN { doc["boolean"] = "\
+Do not use `boolean'\'',  use `int'\'' instead"
+    category["boolean"] = ari_regression
+}
+/(^|[^_[:alnum:]])boolean([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("boolean")
+    }
+}
+
+BEGIN { doc["false"] = "\
+Definitely do not use `false'\'' in boolean expressions"
+    category["false"] = ari_regression
+}
+/(^|[^_[:alnum:]])false([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("false")
+    }
+}
+
+BEGIN { doc["true"] = "\
+Do not try to use `true'\'' in boolean expressions"
+    category["true"] = ari_regression
+}
+/(^|[^_[:alnum:]])true([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("true")
+    }
+}
+
+# Typedefs that are either redundant or can be reduced to `struct
+# type *''.
+# Must be placed before if assignment otherwise ARI exceptions
+# are not handled correctly.
+
+BEGIN { doc["d_namelen"] = "\
+Do not use dirent.d_namelen, instead use NAMELEN"
+    category["d_namelen"] = ari_regression
+}
+/(^|[^_[:alnum:]])d_namelen([^_[:alnum:]]|$)/ {
+    fail("d_namelen")
+}
+
+BEGIN { doc["strlen d_name"] = "\
+Do not use strlen dirent.d_name, instead use NAMELEN"
+    category["strlen d_name"] = ari_regression
+}
+/(^|[^_[:alnum:]])strlen[[:space:]]*\(.*[^_[:alnum:]]d_name([^_[:alnum:]]|$)/ {
+    fail("strlen d_name")
+}
+
+BEGIN { doc["var_boolean"] = "\
+Replace var_boolean with add_setshow_boolean_cmd"
+    category["var_boolean"] = ari_regression
+    fix("var_boolean", "gdb/command.h", 1)
+    # fix only uses the last directory level
+    fix("var_boolean", "cli/cli-decode.c", 2)
+}
+/(^|[^_[:alnum:]])var_boolean([^_[:alnum:]]|$)/ {
+    if ($0 !~ /(^|[^_[:alnum:]])case *var_boolean:/) {
+	fail("var_boolean")
+    }
+}
+
+BEGIN { doc["generic_use_struct_convention"] = "\
+Replace generic_use_struct_convention with nothing, \
+EXTRACT_STRUCT_VALUE_ADDRESS is a predicate"
+    category["generic_use_struct_convention"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_use_struct_convention([^_[:alnum:]]|$)/ {
+    fail("generic_use_struct_convention")
+}
+
+BEGIN { doc["if assignment"] = "\
+An IF statement'\''s expression contains an assignment (the GNU coding \
+standard discourages this)"
+    category["if assignment"] = ari_code
+}
+BEGIN { doc["if clause more than 50 lines"] = "\
+An IF statement'\''s expression expands over 50 lines"
+    category["if clause more than 50 lines"] = ari_code
+}
+#
+# Accumulate continuation lines
+FNR == 1 {
+    in_if = 0
+}
+
+/(^|[^_[:alnum:]])if / {
+    in_if = 1;
+    if_brace_level = 0;
+    if_cont_p = 0;
+    if_count = 0;
+    if_brace_end_pos = 0;
+    if_full_line = "";
+}
+(in_if)  {
+    # We want everything up to closing brace of same level
+    if_count++;
+    if (if_count > 50) {
+	print "multiline if: " if_full_line $0
+	fail("if clause more than 50 lines")
+	if_brace_level = 0;
+	if_full_line = "";
+    } else {
+	if (if_count == 1) {
+	    i = index($0,"if ");
+	} else {
+	    i = 1;
+	}
+	for (i=i; i <= length($0); i++) {
+	    char = substr($0,i,1);
+	    if (char == "(") { if_brace_level++; }
+	    if (char == ")") {
+		if_brace_level--;
+		if (!if_brace_level) {
+		    if_brace_end_pos = i;
+		    after_if = substr($0,i+1,length($0));
+		    # Do not parse what is following
+		    break;
+		}
+	    }
+	}
+	if (if_brace_level == 0) {
+	    $0 = substr($0,1,i);
+	    in_if = 0;
+	} else {
+	    if_full_line = if_full_line $0;
+	    if_cont_p = 1;
+	    next;
+	}
+    }
+}
+# if we arrive here, we need to concatenate, but we are at brace level 0
+
+(if_brace_end_pos) {
+    $0 = if_full_line substr($0,1,if_brace_end_pos);
+    if (if_count > 1) {
+	# print "IF: multi line " if_count " found at " FILENAME ":" FNR " \"" $0 "\""
+    }
+    if_cont_p = 0;
+    if_full_line = "";
+}
+/(^|[^_[:alnum:]])if .* = / {
+    # print "fail in if " $0
+    fail("if assignment")
+}
+(if_brace_end_pos) {
+    $0 = $0 after_if;
+    if_brace_end_pos = 0;
+    in_if = 0;
+}
+
+# Printout of all found bug
+
+BEGIN {
+    if (print_doc) {
+	for (bug in doc) {
+	    fail(bug)
+	}
+	exit
+    }
+}' "$@"
+
Index: contrib/ari/gdb_find.sh
===================================================================
RCS file: contrib/ari/gdb_find.sh
diff -N contrib/ari/gdb_find.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/gdb_find.sh	28 May 2012 18:17:57 -0000
@@ -0,0 +1,41 @@
+#!/bin/sh
+
+# GDB script to create list of files to check using gdb_ari.sh.
+#
+# Copyright (C) 2003-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=C ; export LANG
+LC_ALL=C ; export LC_ALL
+
+
+# A find that prunes files that GDB users shouldn't be interested in.
+# Use sort to order files alphabetically.
+
+find "$@" \
+    -name testsuite -prune -o \
+    -name gdbserver -prune -o \
+    -name gnulib -prune -o \
+    -name osf-share -prune -o \
+    -name '*-stub.c' -prune -o \
+    -name '*-exp.c' -prune -o \
+    -name ada-lex.c -prune -o \
+    -name cp-name-parser.c -prune -o \
+    -type f -name '*.[lyhc]' -print | sort
Index: contrib/ari/update-web-ari.sh
===================================================================
RCS file: contrib/ari/update-web-ari.sh
diff -N contrib/ari/update-web-ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ contrib/ari/update-web-ari.sh	28 May 2012 18:17:57 -0000
@@ -0,0 +1,921 @@
+#!/bin/sh -x
+
+# GDB script to create GDB ARI web page.
+#
+# Copyright (C) 2001-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# TODO: setjmp.h, setjmp and longjmp.
+
+# Direct stderr into stdout but still hang onto stderr (/dev/fd/3)
+exec 3>&2 2>&1
+ECHO ()
+{
+#   echo "$@" | tee /dev/fd/3 1>&2
+    echo "$@" 1>&2
+    echo "$@" 1>&3
+}
+
+# Really mindless usage
+if test $# -ne 4
+then
+    echo "Usage: $0 <snapshot/sourcedir> <tmpdir> <destdir> <project>" 1>&2
+    exit 1
+fi
+snapshot=$1 ; shift
+tmpdir=$1 ; shift
+wwwdir=$1 ; shift
+project=$1 ; shift
+
+# Try to create destination directory if it doesn't exist yet
+if [ ! -d ${wwwdir} ]
+then
+  mkdir -p ${wwwdir}
+fi
+
+# Fail if destination directory doesn't exist or is not writable
+if [ ! -w ${wwwdir} -o ! -d ${wwwdir} ]
+then
+  echo ERROR: Can not write to directory ${wwwdir} >&2
+  exit 2
+fi
+
+if [ ! -r ${snapshot} ]
+then
+    echo ERROR: Can not read snapshot file 1>&2
+    exit 1
+fi
+
+# FILE formats
+# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+# Where ``*'' is {source,warning,indent,doschk}
+
+unpack_source_p=true
+delete_source_p=true
+
+check_warning_p=false # broken
+check_indent_p=false # too slow, too many fail
+check_source_p=true
+check_doschk_p=true
+check_werror_p=true
+
+update_doc_p=true
+update_web_p=true
+
+if awk --version 2>&1 </dev/null | grep -i gnu > /dev/null
+then
+  AWK=awk
+else
+  AWK=gawk
+fi
+export AWK
+
+# Set up a few cleanups
+if ${delete_source_p}
+then
+    trap "cd /tmp; rm -rf ${tmpdir}; exit" 0 1 2 15
+fi
+
+
+# If the first parameter is a directory,
+#we just use it as the extracted source
+if [ -d ${snapshot} ]
+then
+  module=${project}
+  srcdir=${snapshot}
+  aridir=${srcdir}/${module}/contrib/ari
+  unpack_source_p=false
+  delete_source_p=false
+  version_in=${srcdir}/${module}/version.in
+else
+  # unpack the tar-ball
+  if ${unpack_source_p}
+  then
+    # Was it previously unpacked?
+    if ${delete_source_p} || test ! -d ${tmpdir}/${module}*
+    then
+	/bin/rm -rf "${tmpdir}"
+	/bin/mkdir -p ${tmpdir}
+	if [ ! -d ${tmpdir} ]
+	then
+	    echo "Problem creating work directory"
+	    exit 1
+	fi
+	cd ${tmpdir} || exit 1
+	echo `date`: Unpacking tar-ball ...
+	case ${snapshot} in
+	    *.tar.bz2 ) bzcat ${snapshot} ;;
+	    *.tar ) cat ${snapshot} ;;
+	    * ) ECHO Bad file ${snapshot} ; exit 1 ;;
+	esac | tar xf -
+    fi
+  fi
+
+  module=`basename ${snapshot}`
+  module=`basename ${module} .bz2`
+  module=`basename ${module} .tar`
+  srcdir=`echo ${tmpdir}/${module}*`
+  aridir=${HOME}/ss
+  version_in=${srcdir}/gdb/version.in
+fi
+
+if [ ! -r ${version_in} ]
+then
+    echo ERROR: missing version file 1>&2
+    exit 1
+fi
+version=`cat ${version_in}`
+
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_warning_p} && test -d "${srcdir}"
+then
+    echo `date`: Parsing compiler warnings 1>&2
+    cat ${root}/ari.compile | $AWK '
+BEGIN {
+    FS=":";
+}
+/^[^:]*:[0-9]*: warning:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  warning[file] += 1;
+}
+/^[^:]*:[0-9]*: error:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  error[file] += 1;
+}
+END {
+  for (file in warning) {
+    print file ":warning:" level[file]
+  }
+  for (file in error) {
+    print file ":error:" level[file]
+  }
+}
+' > ${root}/ari.warning.bug
+fi
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_indent_p} && test -d "${srcdir}"
+then
+    printf "Analizing file indentation:" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh ${project} | while read f
+    do
+	if /bin/sh ${aridir}/gdb_indent.sh < ${f} 2>/dev/null | cmp -s - ${f}
+	then
+	    :
+	else
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    echo "${f}:0: info: indent: Indentation does not match GNU indent output"
+	fi
+    done ) > ${wwwdir}/ari.indent.bug
+    echo ""
+fi
+
+if ${check_source_p} && test -d "${srcdir}"
+then
+    bugf=${wwwdir}/ari.source.bug
+    oldf=${wwwdir}/ari.source.old
+    srcf=${wwwdir}/ari.source.lines
+    oldsrcf=${wwwdir}/ari.source.lines-old
+
+    diff=${wwwdir}/ari.source.diff
+    diffin=${diff}-in
+    newf1=${bugf}1
+    oldf1=${oldf}1
+    oldpruned=${oldf1}-pruned
+    newpruned=${newf1}-pruned
+
+    cp -f ${bugf} ${oldf}
+    cp -f ${srcf} ${oldsrcf}
+    rm -f ${srcf}
+    node=`uname -n`
+    echo "`date`: Using source lines ${srcf}" 1>&2
+    echo "`date`: Checking source code" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh "${project}" | \
+	xargs /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --src=${srcf}
+    ) > ${bugf}
+    # Remove things we are not interested in to signal by email
+    # gdbarch changes are not important here
+    # Also convert ` into ' to avoid command substitution in script below
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${oldf} > ${oldf1}
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${bugf} > ${newf1}
+    # Remove line number info so that code inclusion/deletion
+    # has no impact on the result
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${oldf1} > ${oldpruned}
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${newf1} > ${newpruned}
+    # Use diff without option to get normal diff output that
+    # is reparsed after
+    diff ${oldpruned} ${newpruned} > ${diffin}
+    # Only keep new warnings
+    sed -n -e "/^>.*/p" ${diffin} > ${diff}
+    sedscript=${wwwdir}/sedscript
+    script=${wwwdir}/script
+    sed -n -e "s|\(^[0-9,]*\)a\(.*\)|echo \1a\2 \n \
+	sed -n \'\2s:\\\\(.*\\\\):> \\\\1:p\' ${newf1}|p" \
+	-e "s|\(^[0-9,]*\)d\(.*\)|echo \1d\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1}|p" \
+	-e "s|\(^[0-9,]*\)c\(.*\)|echo \1c\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1} \n \
+	sed -n \"\2s:\\\\(.*\\\\):> \\\\1:p\" ${newf1}|p" \
+	${diffin} > ${sedscript}
+    ${SHELL} ${sedscript} > ${wwwdir}/message
+    sed -n \
+	-e "s;\(.*\);echo \\\"\1\\\";p" \
+	-e "s;.*< \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${oldsrcf};p" \
+	-e "s;.*> \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${srcf};p" \
+	${wwwdir}/message > ${script}
+    ${SHELL} ${script} > ${wwwdir}/mail-message
+    if [ "x${branch}" != "x" ]; then
+	email_suffix="`date` in ${branch}"
+    else
+	email_suffix="`date`"
+    fi
+
+fi
+
+
+
+
+if ${check_doschk_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking for doschk" 1>&2
+    rm -f "${wwwdir}"/ari.doschk.*
+    fnchange_lst="${srcdir}"/gdb/config/djgpp/fnchange.lst
+    fnchange_awk="${wwwdir}"/ari.doschk.awk
+    doschk_in="${wwwdir}"/ari.doschk.in
+    doschk_out="${wwwdir}"/ari.doschk.out
+    doschk_bug="${wwwdir}"/ari.doschk.bug
+    doschk_char="${wwwdir}"/ari.doschk.char
+
+    # Transform fnchange.lst into fnchange.awk.  The program DJTAR
+    # does a textual substitution of each file name using the list.
+    # Generate an awk script that does the equivalent - matches an
+    # exact line and then outputs the replacement.
+
+    sed -e 's;@[^@]*@[/]*\([^ ]*\) @[^@]*@[/]*\([^ ]*\);\$0 == "\1" { print "\2"\; next\; };' \
+	< "${fnchange_lst}" > "${fnchange_awk}"
+    echo '{ print }' >> "${fnchange_awk}"
+
+    # Do the raw analysis - transform the list of files into the DJGPP
+    # equivalents putting it in the .in file
+    ( cd "${srcdir}" && find * \
+	-name '*.info-[0-9]*' -prune \
+	-o -name tcl -prune \
+	-o -name itcl -prune \
+	-o -name tk -prune \
+	-o -name libgui -prune \
+	-o -name tix -prune \
+	-o -name dejagnu -prune \
+	-o -name expect -prune \
+	-o -type f -print ) \
+    | $AWK -f ${fnchange_awk} > ${doschk_in}
+
+    # Start with a clean slate
+    rm -f ${doschk_bug}
+
+    # Check for any invalid characters.
+    grep '[\+\,\;\=\[\]\|\<\>\\\"\:\?\*]' < ${doschk_in} > ${doschk_char}
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    sed < ${doschk_char} >> ${doschk_bug} \
+	-e s'/$/:0: dos: DOSCHK: Invalid DOS character/'
+
+    # Magic to map ari.doschk.out to ari.doschk.bug goes here
+    doschk < ${doschk_in} > ${doschk_out}
+    cat ${doschk_out} | $AWK >> ${doschk_bug} '
+BEGIN {
+    state = 1;
+    invalid_dos = state++; bug[invalid_dos] = "invalid DOS file name";  category[invalid_dos] = "dos";
+    same_dos = state++;    bug[same_dos]    = "DOS 8.3";                category[same_dos] = "dos";
+    same_sysv = state++;   bug[same_sysv]   = "SysV";
+    long_sysv = state++;   bug[long_sysv]   = "long SysV";
+    internal = state++;    bug[internal]    = "internal doschk";        category[internal] = "internal";
+    state = 0;
+}
+/^$/ { state = 0; next; }
+/^The .* not valid DOS/     { state = invalid_dos; next; }
+/^The .* same DOS/          { state = same_dos; next; }
+/^The .* same SysV/         { state = same_sysv; next; }
+/^The .* too long for SysV/ { state = long_sysv; next; }
+/^The .* /                  { state = internal; next; }
+
+NF == 0 { next }
+
+NF == 3 { name = $1 ; file = $3 }
+NF == 1 { file = $1 }
+NF > 3 && $2 == "-" { file = $1 ; name = gensub(/^.* - /, "", 1) }
+
+state == same_dos {
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print  file ":0: " category[state] ": " \
+	name " " bug[state] " " " dup: " \
+	" DOSCHK - the names " name " and " file " resolve to the same" \
+	" file on a " bug[state] \
+	" system.<br>For DOS, this can be fixed by modifying the file" \
+	" fnchange.lst."
+    next
+}
+state == invalid_dos {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  name ": DOSCHK - " name
+    next
+}
+state == internal {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  bug[state] ": DOSCHK - a " \
+	bug[state] " problem"
+}
+'
+fi
+
+
+
+if ${check_werror_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking Makefile.in for non- -Werror rules"
+    rm -f ${wwwdir}/ari.werror.*
+    cat "${srcdir}/${project}/Makefile.in" | $AWK > ${wwwdir}/ari.werror.bug '
+BEGIN {
+    count = 0
+    cont_p = 0
+    full_line = ""
+}
+/^[-_[:alnum:]]+\.o:/ {
+    file = gensub(/.o:.*/, "", 1) ".c"
+}
+
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+/\$\(COMPILE\.pre\)/ {
+    print file " has  line " $0
+    if (($0 !~ /\$\(.*ERROR_CFLAGS\)/) && ($0 !~ /\$\(INTERNAL_CFLAGS\)/)) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print "'"${project}"'/" file ":0: info: Werror: The file is not being compiled with -Werror"
+    }
+}
+'
+fi
+
+
+# From the warnings, generate the doc and indexed bug files
+if ${update_doc_p}
+then
+    cd ${wwwdir}
+    rm -f ari.doc ari.idx ari.doc.bug
+    # Generate an extra file containing all the bugs that the ARI can detect.
+    /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --print-doc >> ari.doc.bug
+    cat ari.*.bug | $AWK > ari.idx '
+BEGIN {
+    FS=": *"
+}
+{
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    file = $1
+    line = $2
+    category = $3
+    bug = $4
+    if (! (bug in cat)) {
+	cat[bug] = category
+	# strip any trailing .... (supplement)
+	doc[bug] = gensub(/ \([^\)]*\)$/, "", 1, $5)
+	count[bug] = 0
+    }
+    if (file != "") {
+	count[bug] += 1
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	print bug ":" file ":" category
+    }
+    # Also accumulate some categories as obsolete
+    if (category == "deprecated") {
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	if (file != "") {
+	    print category ":" file ":" "obsolete"
+	}
+	#count[category]++
+	#doc[category] = "Contains " category " code"
+    }
+}
+END {
+    i = 0;
+    for (bug in count) {
+	# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+	print bug ":" count[bug] ":" cat[bug] ":" doc[bug] >> "ari.doc"
+    }
+}
+'
+fi
+
+
+# print_toc BIAS MIN_COUNT CATEGORIES TITLE
+
+# Print a table of contents containing the bugs CATEGORIES.  If the
+# BUG count >= MIN_COUNT print it in the table-of-contents.  If
+# MIN_COUNT is non -ve, also include a link to the table.Adjust the
+# printed BUG count by BIAS.
+
+all=
+
+print_toc ()
+{
+    bias="$1" ; shift
+    min_count="$1" ; shift
+
+    all=" $all $1 "
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    shift
+
+    title="$@" ; shift
+
+    echo "<p>" >> ${newari}
+    echo "<a name=${title}>" | tr '[A-Z]' '[a-z]' >> ${newari}
+    echo "<h3>${title}</h3>" >> ${newari}
+    cat >> ${newari} # description
+
+    cat >> ${newari} <<EOF
+<p>
+<table>
+<tr><th align=left>BUG</th><th>Total</th><th align=left>Description</th></tr>
+EOF
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    cat ${wwwdir}/ari.doc \
+    | sort -t: +1rn -2 +0d \
+    | $AWK >> ${newari} '
+BEGIN {
+    FS=":"
+    '"$categories"'
+    MIN_COUNT = '${min_count}'
+    BIAS = '${bias}'
+    total = 0
+    nr = 0
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (count < MIN_COUNT) next
+    if (!(category in categories)) next
+    nr += 1
+    total += count
+    printf "<tr>"
+    printf "<th align=left valign=top><a name=\"%s\">", bug
+    printf "%s", gensub(/_/, " ", "g", bug)
+    printf "</a></th>"
+    printf "<td align=right valign=top>"
+    if (count > 0 && MIN_COUNT >= 0) {
+	printf "<a href=\"#,%s\">%d</a></td>", bug, count + BIAS
+    } else {
+	printf "%d", count + BIAS
+    }
+    printf "</td>"
+    printf "<td align=left valign=top>%s</td>", doc
+    printf "</tr>"
+    print ""
+}
+END {
+    print "<tr><th align=right valign=top>" nr "</th><th align=right valign=top>" total "</th><td></td></tr>"
+}
+'
+cat >> ${newari} <<EOF
+</table>
+<p>
+EOF
+}
+
+
+print_table ()
+{
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    # Remember to prune the dir prefix from projects files
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    cat ${wwwdir}/ari.idx | $AWK >> ${newari} '
+function qsort (table,
+		middle, tmp, left, nr_left, right, nr_right, result) {
+    middle = ""
+    for (middle in table) { break; }
+    nr_left = 0;
+    nr_right = 0;
+    for (tmp in table) {
+	if (tolower(tmp) < tolower(middle)) {
+	    nr_left++
+	    left[tmp] = tmp
+	} else if (tolower(tmp) > tolower(middle)) {
+	    nr_right++
+	    right[tmp] = tmp
+	}
+    }
+    #print "qsort " nr_left " " middle " " nr_right > "/dev/stderr"
+    result = ""
+    if (nr_left > 0) {
+	result = qsort(left) SUBSEP
+    }
+    result = result middle
+    if (nr_right > 0) {
+	result = result SUBSEP qsort(right)
+    }
+    return result
+}
+function print_heading (where, bug_i) {
+    print ""
+    print "<tr border=1>"
+    print "<th align=left>File</th>"
+    print "<th align=left><em>Total</em></th>"
+    print "<th></th>"
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th>"
+	# The title names are offset by one.  Otherwize, when the browser
+	# jumps to the name it leaves out half the relevant column.
+	#printf "<a name=\",%s\">&nbsp;</a>", bug
+	printf "<a name=\",%s\">&nbsp;</a>", i2bug[bug_i-1]
+	printf "<a href=\"#%s\">", bug
+	printf "%s", gensub(/_/, " ", "g", bug)
+	printf "</a>\n"
+	printf "</th>\n"
+    }
+    #print "<th></th>"
+    printf "<th><a name=\"%s,\">&nbsp;</a></th>\n", i2bug[bug_i-1]
+    print "<th align=left><em>Total</em></th>"
+    print "<th align=left>File</th>"
+    print "</tr>"
+}
+function print_totals (where, bug_i) {
+    print "<th align=left><em>Totals</em></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&gt;"
+    printf "</th>\n"
+    print "<th></th>";
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th align=right>"
+	printf "<em>"
+	printf "<a href=\"#%s\">%d</a>", bug, bug_total[bug]
+	printf "</em>";
+	printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, where], bug
+	printf "<a href=\"#%s,%s\">v</a>", next_file[bug, where], bug
+	printf "<a name=\"%s,%s\">&nbsp;</a>", where, bug
+	printf "</th>";
+	print ""
+    }
+    print "<th></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&lt;"
+    printf "</th>\n"
+    print "<th align=left><em>Totals</em></th>"
+    print "</tr>"
+}
+BEGIN {
+    FS = ":"
+    '"${categories}"'
+    nr_file = 0;
+    nr_bug = 0;
+}
+{
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    bug = $1
+    file = $2
+    category = $3
+    # Interested in this
+    if (!(category in categories)) next
+    # Totals
+    db[bug, file] += 1
+    bug_total[bug] += 1
+    file_total[file] += 1
+    total += 1
+}
+END {
+
+    # Sort the files and bugs creating indexed lists.
+    nr_bug = split(qsort(bug_total), i2bug, SUBSEP);
+    nr_file = split(qsort(file_total), i2file, SUBSEP);
+
+    # Dummy entries for first/last
+    i2file[0] = 0
+    i2file[-1] = -1
+    i2bug[0] = 0
+    i2bug[-1] = -1
+
+    # Construct a cycle of next/prev links.  The file/bug "0" and "-1"
+    # are used to identify the start/end of the cycle.  Consequently,
+    # prev(0) = -1 (prev of start is the end) and next(-1) = 0 (next
+    # of end is the start).
+
+    # For all the bugs, create a cycle that goes to the prev / next file.
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i]
+	prev = 0
+	prev_file[bug, 0] = -1
+	next_file[bug, -1] = 0
+	for (file_i = 1; file_i <= nr_file; file_i++) {
+	    file = i2file[file_i]
+	    if ((bug, file) in db) {
+		prev_file[bug, file] = prev
+		next_file[bug, prev] = file
+		prev = file
+	    }
+	}
+	prev_file[bug, -1] = prev
+	next_file[bug, prev] = -1
+    }
+
+    # For all the files, create a cycle that goes to the prev / next bug.
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i]
+	prev = 0
+	prev_bug[file, 0] = -1
+	next_bug[file, -1] = 0
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i]
+	    if ((bug, file) in db) {
+		prev_bug[file, bug] = prev
+		next_bug[file, prev] = bug
+		prev = bug
+	    }
+	}
+	prev_bug[file, -1] = prev
+	next_bug[file, prev] = -1
+    }
+
+    print "<table border=1 cellspacing=0>"
+    print "<tr></tr>"
+    print_heading(0);
+    print "<tr></tr>"
+    print_totals(0);
+    print "<tr></tr>"
+
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i];
+	pfile = gensub(/^'${project}'\//, "", 1, file)
+	print ""
+	print "<tr>"
+	print "<th align=left><a name=\"" file ",\">" pfile "</a></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&gt;</a>", file, next_bug[file, 0]
+	printf "</th>\n"
+	print "<th></th>"
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i];
+	    if ((bug, file) in db) {
+		printf "<td align=right>"
+		printf "<a href=\"#%s\">%d</a>", bug, db[bug, file]
+		printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, file], bug
+		printf "<a href=\"#%s,%s\">v</a>", next_file[bug, file], bug
+		printf "<a name=\"%s,%s\">&nbsp;</a>", file, bug
+		printf "</td>"
+		print ""
+	    } else {
+		print "<td>&nbsp;</td>"
+		#print "<td></td>"
+	    }
+	}
+	print "<th></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&lt;</a>", file, prev_bug[file, -1]
+	printf "</th>\n"
+	print "<th align=left>" pfile "</th>"
+	print "</tr>"
+    }
+
+    print "<tr></tr>"
+    print_totals(-1)
+    print "<tr></tr>"
+    print_heading(-1);
+    print "<tr></tr>"
+    print ""
+    print "</table>"
+    print ""
+}
+'
+}
+
+
+# Make the scripts available
+cp ${aridir}/gdb_*.sh ${wwwdir}
+
+# Compute the ARI index - ratio of zero vs non-zero problems.
+indexes=`${AWK} '
+BEGIN {
+    FS=":"
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1; count = $2; category = $3; doc = $4
+
+    if (bug ~ /^legacy_/) legacy++
+    if (bug ~ /^deprecated_/) deprecated++
+
+    if (category !~ /^gdbarch$/) {
+	bugs += count
+    }
+    if (count == 0) {
+	oks++
+    }
+}
+END {
+    #print "tests/ok:", nr / ok
+    #print "bugs/tests:", bugs / nr
+    #print "bugs/ok:", bugs / ok
+    print bugs / ( oks + legacy + deprecated )
+}
+' ${wwwdir}/ari.doc`
+
+# Merge, generating the ARI tables.
+if ${update_web_p}
+then
+    echo "Create the ARI table" 1>&2
+    oldari=${wwwdir}/old.html
+    ari=${wwwdir}/index.html
+    newari=${wwwdir}/new.html
+    rm -f ${newari} ${newari}.gz
+    cat <<EOF >> ${newari}
+<html>
+<head>
+<title>A.R. Index for GDB version ${version}</title>
+</head>
+<body>
+
+<center><h2>A.R. Index for GDB version ${version}<h2></center>
+
+<!-- body, update above using ../index.sh -->
+
+<!-- Navigation.  This page contains the following anchors.
+"BUG": The definition of the bug.
+"FILE,BUG": The row/column containing FILEs BUG count
+"0,BUG", "-1,BUG": The top/bottom total for BUGs column.
+"FILE,O", "FILE,-1": The left/right total for FILEs row.
+",BUG": The top title for BUGs column.
+"FILE,": The left title for FILEs row.
+-->
+
+<center><h3>${indexes}</h3></center>
+<center><h3>You can not take this seriously!</h3></center>
+
+<center>
+Also available:
+<a href="../gdb/ari/">most recent branch</a>
+|
+<a href="../gdb/current/ari/">current</a>
+|
+<a href="../gdb/download/ari/">last release</a>
+</center>
+
+<center>
+Last updated: `date -u`
+</center>
+EOF
+
+    print_toc 0 1 "internal regression" Critical <<EOF
+Things previously eliminated but returned.  This should always be empty.
+EOF
+
+    print_table "regression code comment obsolete gettext"
+
+    print_toc 0 0 code Code <<EOF
+Coding standard problems, portability problems, readability problems.
+EOF
+
+    print_toc 0 0 comment Comments <<EOF
+Problems concerning comments in source files.
+EOF
+
+    print_toc 0 0 gettext GetText <<EOF
+Gettext related problems.
+EOF
+
+    print_toc 0 -1 dos DOS 8.3 File Names <<EOF
+File names with problems on 8.3 file systems.
+EOF
+
+    print_toc -2 -1 deprecated Deprecated <<EOF
+Mechanisms that have been replaced with something better, simpler,
+cleaner; or are no longer required by core-GDB.  New code should not
+use deprecated mechanisms.  Existing code, when touched, should be
+updated to use non-deprecated mechanisms.  See obsolete and deprecate.
+(The declaration and definition are hopefully excluded from count so
+zero should indicate no remaining uses).
+EOF
+
+    print_toc 0 0 obsolete Obsolete <<EOF
+Mechanisms that have been replaced, but have not yet been marked as
+such (using the deprecated_ prefix).  See deprecate and deprecated.
+EOF
+
+    print_toc 0 -1 deprecate Deprecate <<EOF
+Mechanisms that are a candidate for being made obsolete.  Once core
+GDB no longer depends on these mechanisms and/or there is a
+replacement available, these mechanims can be deprecated (adding the
+deprecated prefix) obsoleted (put into category obsolete) or deleted.
+See obsolete and deprecated.
+EOF
+
+    print_toc -2 -1 legacy Legacy <<EOF
+Methods used to prop up targets using targets that still depend on
+deprecated mechanisms. (The method's declaration and definition are
+hopefully excluded from count).
+EOF
+
+    print_toc -2 -1 gdbarch Gdbarch <<EOF
+Count of calls to the gdbarch set methods.  (Declaration and
+definition hopefully excluded from count).
+EOF
+
+    print_toc 0 -1 macro Macro <<EOF
+Breakdown of macro definitions (and #undef) in configuration files.
+EOF
+
+    print_toc 0 0 regression Fixed <<EOF
+Problems that have been expunged from the source code.
+EOF
+
+    # Check for invalid categories
+    for a in $all; do
+	alls="$alls all[$a] = 1 ;"
+    done
+    cat ari.*.doc | $AWK >> ${newari} '
+BEGIN {
+    FS = ":"
+    '"$alls"'
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (!(category in all)) {
+	print "<b>" category "</b>: no documentation<br>"
+    }
+}
+'
+
+    cat >> ${newari} <<EOF
+<center>
+Input files:
+`( cd ${wwwdir} && ls ari.*.bug ari.idx ari.doc ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<center>
+Scripts:
+`( cd ${wwwdir} && ls *.sh ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<!-- /body, update below using ../index.sh -->
+</body>
+</html>
+EOF
+
+    for i in . .. ../..; do
+	x=${wwwdir}/${i}/index.sh
+	if test -x $x; then
+	    $x ${newari}
+	    break
+	fi
+    done
+
+    gzip -c -v -9 ${newari} > ${newari}.gz
+
+    cp ${ari} ${oldari}
+    cp ${ari}.gz ${oldari}.gz
+    cp ${newari} ${ari}
+    cp ${newari}.gz ${ari}.gz
+
+fi # update_web_p
+
+# ls -l ${wwwdir}
+
+exit 0

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [RFA-v3] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-28 18:34           ` [RFA-v3] " Pierre Muller
@ 2012-05-28 18:38             ` Pierre Muller
  2012-05-29 13:02             ` Joel Brobecker
  1 sibling, 0 replies; 32+ messages in thread
From: Pierre Muller @ 2012-05-28 18:38 UTC (permalink / raw)
  To: gdb-patches

>   The patch should apply cleanly using
> patch -p 0 -I ari.patch at
> src/gdb directory level and create new scripts in
> gdb/contrib./ari directory.

  I knew I was forgetting something, but it only comes back when you hit the
'Send' button....

Here it is: 
as Sergio noted, the patch doesn't contain any filemode specification.
I only use CVS which doesn't seem to care when creating a patch...

This means that you need to use
chmod u+x contrib./ari/*.sh

in order to be able to create a web page by simply calling
./contrib./ari/create-web-ari-in-src.sh

Sorry about this,

Pierre


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v3] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-28 18:34           ` [RFA-v3] " Pierre Muller
  2012-05-28 18:38             ` Pierre Muller
@ 2012-05-29 13:02             ` Joel Brobecker
  2012-05-29 13:13               ` Pedro Alves
  1 sibling, 1 reply; 32+ messages in thread
From: Joel Brobecker @ 2012-05-29 13:02 UTC (permalink / raw)
  To: Pierre Muller
  Cc: gdb-patches, 'Jan Kratochvil', 'Sergio Durigan Junior'

Hi Pierre,

> As explained in my answer to Sergio, I would like to leave other fixes
> to after initial commit to have a better history of changes relative
> to the ss directory version on sourceware.org server.

That would be fine. I think that's a good reason.

> PS: could contrib get a separate ChangeLog file?

I think we are trying to get as few ChangeLog files as possible.
Any reason why you'd like to have your own ChangeLog file? It does
not seem unreasonable since the changes in contrib/ari as probably
expected to be few and far between once it's stabilized, and
having them listed in the main ChangeLog might make it harder to
have a quick list of the changes made in that directory over time...

> 2012-05-28  Pierre Muller  <muller@ics.u-strasbg.fr>
> 
> 	* contrib/ari/create-web-ari-in-src.sh: New file.
> 	* contrib/ari/gdb_ari.sh: New file.
> 	* contrib/ari/gdb_find.sh: New file.
> 	* contrib/ari/update-web-ari.sh: New file.


> ? contrib/ari/patch
> Index: contrib/ari/create-web-ari-in-src.sh
> ===================================================================
> RCS file: contrib/ari/create-web-ari-in-src.sh
> diff -N contrib/ari/create-web-ari-in-src.sh
> --- /dev/null	1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/create-web-ari-in-src.sh	28 May 2012 18:17:56 -0000
> @@ -0,0 +1,68 @@
> +#! /bin/sh
> +
> +# GDB script to create web ARI page directly from within gdb/ari directory.
                                                           ^^^^^^^^^

gdb/contrib/ari ?

> +set -x

We'll probably want to remove that, at some point, and remove all
the associated "set +x" in the other scripts.

> +# update-web-ari.sh script wants four parameters
> +# 1: directory of checkout src or gdb-RELEASE for release sources.
> +# 2: a temp directory.
> +# 3: a directory for generated web page.
> +# 4: The name of the current package, must be gdb here.
> +# Here we provide default values for these 4 parameters

Are the parameters passed via the environment?

> +# Default location of generate index.hmtl web page.
> +if [ -z "${webdir}" ] ; then
> +  webdir=~/htdocs/www/local/ari
> +fi

Default location should be current working directory, IMO.

> +# Launch update-web-ari.sh in same directory as current script.
> +${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb

Same as "set -x", we'll probably want to get rid of the final parameter
eventually, since we know it'll always be "gdb" for us.

-- 
Joel


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v3] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-29 13:02             ` Joel Brobecker
@ 2012-05-29 13:13               ` Pedro Alves
  2012-05-31  6:56                 ` Pierre Muller
                                   ` (2 more replies)
  0 siblings, 3 replies; 32+ messages in thread
From: Pedro Alves @ 2012-05-29 13:13 UTC (permalink / raw)
  To: Joel Brobecker
  Cc: Pierre Muller, gdb-patches, 'Jan Kratochvil',
	'Sergio Durigan Junior'

On 05/29/2012 02:01 PM, Joel Brobecker wrote:

> Hi Pierre,
> 
>> > As explained in my answer to Sergio, I would like to leave other fixes
>> > to after initial commit to have a better history of changes relative
>> > to the ss directory version on sourceware.org server.
> That would be fine. I think that's a good reason.
> 
>> > PS: could contrib get a separate ChangeLog file?
> I think we are trying to get as few ChangeLog files as possible.


I think it should.  "contrib" is by definition a space for third party
contributed sources that we ship along, ergo not really part of GDB,
unlike the cli, mi, or tui, regformats, etc. subdirectories.

-- 
Pedro Alves


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [RFA-v3] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-29 13:13               ` Pedro Alves
@ 2012-05-31  6:56                 ` Pierre Muller
  2012-05-31 15:59                   ` Joel Brobecker
  2012-06-14 12:36                 ` [RFA-v4] " Pierre Muller
  2012-06-22 16:10                 ` [RFA-v3] " Tom Tromey
  2 siblings, 1 reply; 32+ messages in thread
From: Pierre Muller @ 2012-05-31  6:56 UTC (permalink / raw)
  To: 'Pedro Alves', 'Joel Brobecker'
  Cc: gdb-patches, 'Jan Kratochvil', 'Sergio Durigan Junior'



> -----Message d'origine-----
> De : gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] De la part de Pedro Alves
> Envoyé : mardi 29 mai 2012 15:12
> À : Joel Brobecker
> Cc : Pierre Muller; gdb-patches@sourceware.org; 'Jan Kratochvil'; 'Sergio
> Durigan Junior'
> Objet : Re: [RFA-v3] Add scripts to generate ARI web pages to
> gdb/contrib/ari directory
> 
> On 05/29/2012 02:01 PM, Joel Brobecker wrote:
> 
> > Hi Pierre,
> >
> >> > As explained in my answer to Sergio, I would like to leave other
fixes
> >> > to after initial commit to have a better history of changes relative
> >> > to the ss directory version on sourceware.org server.
> > That would be fine. I think that's a good reason.
> >
> >> > PS: could contrib get a separate ChangeLog file?
> > I think we are trying to get as few ChangeLog files as possible.
> 
> 
> I think it should.  "contrib" is by definition a space for third party
> contributed sources that we ship along, ergo not really part of GDB,
> unlike the cli, mi, or tui, regformats, etc. subdirectories.

  Could you tell me how I should proceed here:

 I would suggest the following:

gdb/ChangeLog entry:

2012-05-31  Pierre Muller  <muller@ics.u-strasbg.fr>

	Include ARI web page generation scripts into GDB CVS repository.
	* contrib/ari/: New directory.
	* contrib./ChangeLog: New file.
	Further ARI related commit will be documented inside
	that new ChangeLog file.

gdb/contrib/ChangeLog entry:

2012-05-31  Pierre Muller  <muller@ics.u-strasbg.fr>

	* ari/create-web-ari-in-src.sh: New file.
	* ari/gdb_ari.sh: New file.
	* ari/gdb_find.sh: New file.
	* ari/update-web-ari.sh: New file.

But I don't know if this is acceptable...

Pierre


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v3] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-31  6:56                 ` Pierre Muller
@ 2012-05-31 15:59                   ` Joel Brobecker
  0 siblings, 0 replies; 32+ messages in thread
From: Joel Brobecker @ 2012-05-31 15:59 UTC (permalink / raw)
  To: Pierre Muller
  Cc: 'Pedro Alves', gdb-patches, 'Jan Kratochvil',
	'Sergio Durigan Junior'

>   Could you tell me how I should proceed here:
> 
>  I would suggest the following:
> 
> gdb/ChangeLog entry:
> 
> 2012-05-31  Pierre Muller  <muller@ics.u-strasbg.fr>
> 
> 	Include ARI web page generation scripts into GDB CVS repository.
> 	* contrib/ari/: New directory.
> 	* contrib./ChangeLog: New file.
> 	Further ARI related commit will be documented inside
> 	that new ChangeLog file.

Personally, I don't think that the entry in gdb/ChangeLog is
necessary. But I wouldn't object either.

> gdb/contrib/ChangeLog entry:
> 
> 2012-05-31  Pierre Muller  <muller@ics.u-strasbg.fr>
> 
> 	* ari/create-web-ari-in-src.sh: New file.
> 	* ari/gdb_ari.sh: New file.
> 	* ari/gdb_find.sh: New file.
> 	* ari/update-web-ari.sh: New file.

I think that this entry alone in the new ChangeLog should be
sufficient.

-- 
Joel


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFA-v4] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-29 13:13               ` Pedro Alves
  2012-05-31  6:56                 ` Pierre Muller
@ 2012-06-14 12:36                 ` Pierre Muller
  2012-06-14 16:02                   ` Joel Brobecker
  2012-09-26 22:15                   ` [RFA-v5] " Pierre Muller
  2012-06-22 16:10                 ` [RFA-v3] " Tom Tromey
  2 siblings, 2 replies; 32+ messages in thread
From: Pierre Muller @ 2012-06-14 12:36 UTC (permalink / raw)
  To: 'Pedro Alves', 'Joel Brobecker'
  Cc: gdb-patches, 'Jan Kratochvil', 'Sergio Durigan Junior'

[-- Attachment #1: Type: text/plain, Size: 660 bytes --]

  Here is a new version of my patch to
insert ARI web page generation script into
GDB sources.

  As requested I removed the references to emails
for sourceware.
  Otherwise I also made a small change to 
generate the web page in a subdir
called trunk/ari
if no tag in found in CVS subdirectory
and branch/ari otherwise.

gdb/contrib./ChangeLog entry (New file)

2012-06-14  Pierre Muller  <muller@ics.u-strasbg.fr>

        Incorporate ARI web page generator into GDB sources.
        * ari/create-web-ari-in-src.sh: New file.
        * ari/gdb_ari.sh: New file.
        * ari/gdb_find.sh: New file.
        * ari/update-web-ari.sh: New file.


Pierre Muller

[-- Attachment #2: contrib-ari.patch --]
[-- Type: application/octet-stream, Size: 71376 bytes --]

projecttype:gdb
revision:HEAD
email:muller@ics.u-strasbg.fr

2012-06-14  Pierre Muller  <muller@ics.u-strasbg.fr>

	Incorporate ARI web page generator into GDB sources.
	* ari/create-web-ari-in-src.sh: New file.
	* ari/gdb_ari.sh: New file.
	* ari/gdb_find.sh: New file.
	* ari/update-web-ari.sh: New file.


Index: src/gdb/contrib/ari/create-web-ari-in-src.sh
===================================================================
RCS file: contrib/ari/create-web-ari-in-src.sh
diff -N contrib/ari/create-web-ari-in-src.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ src/gdb/contrib/ari/create-web-ari-in-src.sh	14 Jun 2012 10:20:54 -0000
@@ -0,0 +1,77 @@
+#! /bin/sh
+
+# GDB script to create web ARI page directly from within gdb/ari directory.
+#
+# Copyright (C) 2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Determine directory of current script.
+scriptpath=`dirname $0`
+# If "scriptpath" is a relative path, then convert it to absolute.
+if [ "`echo ${scriptpath} | cut -b1`" != '/' ] ; then
+    scriptpath="`pwd`/${scriptpath}"
+fi
+
+# update-web-ari.sh script wants four parameters
+# 1: directory of checkout src or gdb-RELEASE for release sources.
+# 2: a temp directory.
+# 3: a directory for generated web page.
+# 4: The name of the current package, must be gdb here.
+# Here we provide default values for these 4 parameters
+
+# srcdir parameter
+if [ -z "${srcdir}" ] ; then
+  srcdir=${scriptpath}/../../..
+fi
+
+# Determine location of a temporary directory to be used by
+# update-web-ari.sh script.
+if [ -z "${tempdir}" ] ; then
+  if [ ! -z "$TMP" ] ; then
+    tempdir=$TMP/create-ari
+  elif [ ! -z "$TEMP" ] ; then
+    tempdir=$TEMP/create-ari
+  else
+    tempdir=/tmp/create-ari
+  fi
+fi
+
+# Default location of generate index.hmtl web page.
+if [ -z "${webdir}" ] ; then
+# Use 'branch' subdir name if Tag contains branch
+  if [ -f "${srcdir}/gdb/CVS/Tag" ] ; then
+    tagname=`cat "${srcdir}/gdb/CVS/Tag"`
+  else
+    tagname=trunk
+  fi
+  if [ "${tagname#branch}" != "${tagname}" ] ; then
+    subdir=branch
+  else
+    subdir=trunk
+  fi
+  webdir=`pwd`/${subdir}/ari
+fi
+
+# Launch update-web-ari.sh in same directory as current script.
+${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
+
+if [ -f "${webdir}/index.html" ] ; then
+  echo "ARI output can be viewed in file \"${webdir}/index.html\""
+else
+  echo "ARI script failed to generate file \"${webdir}/index.html\""
+fi
+
Index: src/gdb/contrib/ari/gdb_ari.sh
===================================================================
RCS file: contrib/ari/gdb_ari.sh
diff -N contrib/ari/gdb_ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ src/gdb/contrib/ari/gdb_ari.sh	14 Jun 2012 10:20:54 -0000
@@ -0,0 +1,1351 @@
+#!/bin/sh
+
+# GDB script to list of problems using awk.
+#
+# Copyright (C) 2002-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=c ; export LANG
+LC_ALL=c ; export LC_ALL
+
+# Permanent checks take the form:
+
+#     Do not use XXXX, ISO C 90 implies YYYY
+#     Do not use XXXX, instead use YYYY''.
+
+# and should never be removed.
+
+# Temporary checks take the form:
+
+#     Replace XXXX with YYYY
+
+# and once they reach zero, can be eliminated.
+
+# FIXME: It should be able to override this on the command line
+error="regression"
+warning="regression"
+ari="regression eol code comment deprecated legacy obsolete gettext"
+all="regression eol code comment deprecated legacy obsolete gettext deprecate internal gdbarch macro"
+print_doc=0
+print_idx=0
+
+usage ()
+{
+    cat <<EOF 1>&2
+Error: $1
+
+Usage:
+    $0 --print-doc --print-idx -Wall -Werror -W<category> <file> ...
+Options:
+  --print-doc    Print a list of all potential problems, then exit.
+  --print-idx    Include the problems IDX (index or key) in every message.
+  --src=file     Write source lines to file.
+  -Werror        Treat all problems as errors.
+  -Wall          Report all problems.
+  -Wari          Report problems that should be fixed in new code.
+  -W<category>   Report problems in the specifed category.  Vaid categories
+                 are: ${all}
+EOF
+    exit 1
+}
+
+
+# Parse the various options
+Woptions=
+srclines=""
+while test $# -gt 0
+do
+    case "$1" in
+    -Wall ) Woptions="${all}" ;;
+    -Wari ) Woptions="${ari}" ;;
+    -Werror ) Werror=1 ;;
+    -W* ) Woptions="${Woptions} `echo x$1 | sed -e 's/x-W//'`" ;;
+    --print-doc ) print_doc=1 ;;
+    --print-idx ) print_idx=1 ;;
+    --src=* ) srclines="`echo $1 | sed -e 's/--src=/srclines=\"/'`\"" ;;
+    -- ) shift ; break ;;
+    - ) break ;;
+    -* ) usage "$1: unknown option" ;;
+    * ) break ;;
+    esac
+    shift
+done
+if test -n "$Woptions" ; then
+    warning="$Woptions"
+    error=
+fi
+
+
+# -Werror implies treating all warnings as errors.
+if test -n "${Werror}" ; then
+    error="${error} ${warning}"
+fi
+
+
+# Validate all errors and warnings.
+for w in ${warning} ${error}
+do
+    case " ${all} " in
+    *" ${w} "* ) ;;
+    * ) usage "Unknown option -W${w}" ;;
+    esac
+done
+
+
+# make certain that there is at least one file.
+if test $# -eq 0 -a ${print_doc} = 0
+then
+    usage "Missing file."
+fi
+
+
+# Convert the errors/warnings into corresponding array entries.
+for a in ${all}
+do
+    aris="${aris} ari_${a} = \"${a}\";"
+done
+for w in ${warning}
+do
+    warnings="${warnings} warning[ari_${w}] = 1;"
+done
+for e in ${error}
+do
+    errors="${errors} error[ari_${e}]  = 1;"
+done
+
+if [ "$AWK" == "" ] ; then
+  AWK=awk
+fi
+
+${AWK} -- '
+BEGIN {
+    # NOTE, for a per-file begin use "FNR == 1".
+    '"${aris}"'
+    '"${errors}"'
+    '"${warnings}"'
+    '"${srclines}"'
+    print_doc =  '$print_doc'
+    print_idx =  '$print_idx'
+    PWD = "'`pwd`'"
+}
+
+# Print the error message for BUG.  Append SUPLEMENT if non-empty.
+function print_bug(file,line,prefix,category,bug,doc,supplement, suffix,idx) {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    if (supplement) {
+	suffix = " (" supplement ")"
+    } else {
+	suffix = ""
+    }
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print file ":" line ": " prefix category ": " idx doc suffix
+    if (srclines != "") {
+	print file ":" line ":" $0 >> srclines
+    }
+}
+
+function fix(bug,file,count) {
+    skip[bug, file] = count
+    skipped[bug, file] = 0
+}
+
+function fail(bug,supplement) {
+    if (doc[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing doc for bug " bug)
+	exit
+    }
+    if (category[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing category for bug " bug)
+	exit
+    }
+
+    if (ARI_OK == bug) {
+	return
+    }
+    # Trim the filename down to just DIRECTORY/FILE so that it can be
+    # robustly used by the FIX code.
+
+    if (FILENAME ~ /^\//) {
+	canonicalname = FILENAME
+    } else {
+        canonicalname = PWD "/" FILENAME
+    }
+    shortname = gensub (/^.*\/([^\\]*\/[^\\]*)$/, "\\1", 1, canonicalname)
+
+    skipped[bug, shortname]++
+    if (skip[bug, shortname] >= skipped[bug, shortname]) {
+	# print FILENAME, FNR, skip[bug, FILENAME], skipped[bug, FILENAME], bug
+	# Do nothing
+    } else if (error[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "", category[bug], bug, doc[bug], supplement)
+    } else if (warning[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "warning: ", category[bug], bug, doc[bug], supplement)
+    }
+}
+
+FNR == 1 {
+    seen[FILENAME] = 1
+    if (match(FILENAME, "\\.[ly]$")) {
+      # FILENAME is a lex or yacc source
+      is_yacc_or_lex = 1
+    }
+    else {
+      is_yacc_or_lex = 0
+    }
+}
+END {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    # Did we do only a partial skip?
+    for (bug_n_file in skip) {
+	split (bug_n_file, a, SUBSEP)
+	bug = a[1]
+	file = a[2]
+	if (seen[file] && (skipped[bug_n_file] < skip[bug_n_file])) {
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    b = file " missing " bug
+	    print_bug(file, 0, "", "internal", file " missing " bug, "Expecting " skip[bug_n_file] " occurances of bug " bug " in file " file ", only found " skipped[bug_n_file])
+	}
+    }
+}
+
+
+# Skip OBSOLETE lines
+/(^|[^_[:alnum:]])OBSOLETE([^_[:alnum:]]|$)/ { next; }
+
+# Skip ARI lines
+
+BEGIN {
+    ARI_OK = ""
+}
+
+/\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = gensub(/^.*\/\* ARI:[[:space:]]*(.*[^[:space:]])[[:space:]]*\*\/.*$/, "\\1", 1, $0)
+    # print "ARI line found \"" $0 "\""
+    # print "ARI_OK \"" ARI_OK "\""
+}
+! /\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = ""
+}
+
+
+# Things in comments
+
+BEGIN { doc["GNU/Linux"] = "\
+Do not use `Linux'\'', instead use `Linux kernel'\'' or `GNU/Linux system'\'';\
+ comments should clearly differentiate between the two (this test assumes that\
+ word `Linux'\'' appears on the same line as the word `GNU'\'' or `kernel'\''\
+ or a kernel version"
+    category["GNU/Linux"] = ari_comment
+}
+/(^|[^_[:alnum:]])Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux [:digit:]\.[:digit:]+)/ {
+    fail("GNU/Linux")
+}
+
+BEGIN { doc["ARGSUSED"] = "\
+Do not use ARGSUSED, unnecessary"
+    category["ARGSUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ARGSUSED([^_[:alnum:]]|$)/ {
+    fail("ARGSUSED")
+}
+
+
+# SNIP - Strip out comments - SNIP
+
+FNR == 1 {
+    comment_p = 0
+}
+comment_p && /\*\// { gsub (/^([^\*]|\*+[^\/\*])*\*+\//, " "); comment_p = 0; }
+comment_p { next; }
+!comment_p { gsub (/\/\*([^\*]|\*+[^\/\*])*\*+\//, " "); }
+!comment_p && /(^|[^"])\/\*/ { gsub (/\/\*.*$/, " "); comment_p = 1; }
+
+
+BEGIN { doc["_ markup"] = "\
+All messages should be marked up with _."
+    category["_ markup"] = ari_gettext
+}
+/^[^"]*[[:space:]](warning|error|error_no_arg|query|perror_with_name)[[:space:]]*\([^_\(a-z]/ {
+    if (! /\("%s"/) {
+	fail("_ markup")
+    }
+}
+
+BEGIN { doc["trailing new line"] = "\
+A message should not have a trailing new line"
+    category["trailing new line"] = ari_gettext
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(_\(".*\\n"\)[\),]/ {
+    fail("trailing new line")
+}
+
+# Include files for which GDB has a custom version.
+
+BEGIN { doc["assert.h"] = "\
+Do not include assert.h, instead include \"gdb_assert.h\"";
+    category["assert.h"] = ari_regression
+    fix("assert.h", "gdb/gdb_assert.h", 0) # it does not use it
+}
+/^#[[:space:]]*include[[:space:]]+.assert\.h./ {
+    fail("assert.h")
+}
+
+BEGIN { doc["dirent.h"] = "\
+Do not include dirent.h, instead include gdb_dirent.h"
+    category["dirent.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.dirent\.h./ {
+    fail("dirent.h")
+}
+
+BEGIN { doc["regex.h"] = "\
+Do not include regex.h, instead include gdb_regex.h"
+    category["regex.h"] = ari_regression
+    fix("regex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.regex\.h./ {
+    fail("regex.h")
+}
+
+BEGIN { doc["xregex.h"] = "\
+Do not include xregex.h, instead include gdb_regex.h"
+    category["xregex.h"] = ari_regression
+    fix("xregex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.xregex\.h./ {
+    fail("xregex.h")
+}
+
+BEGIN { doc["gnu-regex.h"] = "\
+Do not include gnu-regex.h, instead include gdb_regex.h"
+    category["gnu-regex.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.gnu-regex\.h./ {
+    fail("gnu regex.h")
+}
+
+BEGIN { doc["stat.h"] = "\
+Do not include stat.h or sys/stat.h, instead include gdb_stat.h"
+    category["stat.h"] = ari_regression
+    fix("stat.h", "gdb/gdb_stat.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.stat\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/stat\.h./ {
+    fail("stat.h")
+}
+
+BEGIN { doc["wait.h"] = "\
+Do not include wait.h or sys/wait.h, instead include gdb_wait.h"
+    fix("wait.h", "gdb/gdb_wait.h", 2);
+    category["wait.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.wait\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/wait\.h./ {
+    fail("wait.h")
+}
+
+BEGIN { doc["vfork.h"] = "\
+Do not include vfork.h, instead include gdb_vfork.h"
+    fix("vfork.h", "gdb/gdb_vfork.h", 1);
+    category["vfork.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.vfork\.h./ {
+    fail("vfork.h")
+}
+
+BEGIN { doc["error not internal-warning"] = "\
+Do not use error(\"internal-warning\"), instead use internal_warning"
+    category["error not internal-warning"] = ari_regression
+}
+/error.*\"[Ii]nternal.warning/ {
+    fail("error not internal-warning")
+}
+
+BEGIN { doc["%p"] = "\
+Do not use printf(\"%p\"), instead use printf(\"%s\",paddr()) to dump a \
+target address, or host_address_to_string() for a host address"
+    category["%p"] = ari_code
+}
+/%p/ && !/%prec/ {
+    fail("%p")
+}
+
+BEGIN { doc["%ll"] = "\
+Do not use printf(\"%ll\"), instead use printf(\"%s\",phex()) to dump a \
+`long long'\'' value"
+    category["%ll"] = ari_code
+}
+# Allow %ll in scanf
+/%[0-9]*ll/ && !/scanf \(.*%[0-9]*ll/ {
+    fail("%ll")
+}
+
+
+# SNIP - Strip out strings - SNIP
+
+# Test on top.c, scm-valprint.c, remote-rdi.c, ada-lang.c
+FNR == 1 {
+    string_p = 0
+    trace_string = 0
+}
+# Strip escaped characters.
+{ gsub(/\\./, "."); }
+# Strip quoted quotes.
+{ gsub(/'\''.'\''/, "'\''.'\''"); }
+# End of multi-line string
+string_p && /\"/ {
+    if (trace_string) print "EOS:" FNR, $0;
+    gsub (/^[^\"]*\"/, "'\''");
+    string_p = 0;
+}
+# Middle of multi-line string, discard line.
+string_p {
+    if (trace_string) print "MOS:" FNR, $0;
+    $0 = ""
+}
+# Strip complete strings from the middle of the line
+!string_p && /\"[^\"]*\"/ {
+    if (trace_string) print "COS:" FNR, $0;
+    gsub (/\"[^\"]*\"/, "'\''");
+}
+# Start of multi-line string
+BEGIN { doc["multi-line string"] = "\
+Multi-line string must have the newline escaped"
+    category["multi-line string"] = ari_regression
+}
+!string_p && /\"/ {
+    if (trace_string) print "SOS:" FNR, $0;
+    if (/[^\\]$/) {
+	fail("multi-line string")
+    }
+    gsub (/\"[^\"]*$/, "'\''");
+    string_p = 1;
+}
+# { print }
+
+# Multi-line string
+string_p &&
+
+# Accumulate continuation lines
+FNR == 1 {
+    cont_p = 0
+}
+!cont_p { full_line = ""; }
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+
+# GDB uses ISO C 90.  Check for any non pure ISO C 90 code
+
+BEGIN { doc["PARAMS"] = "\
+Do not use PARAMS(), ISO C 90 implies prototypes"
+    category["PARAMS"] = ari_regression
+}
+/(^|[^_[:alnum:]])PARAMS([^_[:alnum:]]|$)/ {
+    fail("PARAMS")
+}
+
+BEGIN { doc["__func__"] = "\
+Do not use __func__, ISO C 90 does not support this macro"
+    category["__func__"] = ari_regression
+    fix("__func__", "gdb/gdb_assert.h", 1)
+}
+/(^|[^_[:alnum:]])__func__([^_[:alnum:]]|$)/ {
+    fail("__func__")
+}
+
+BEGIN { doc["__FUNCTION__"] = "\
+Do not use __FUNCTION__, ISO C 90 does not support this macro"
+    category["__FUNCTION__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__FUNCTION__([^_[:alnum:]]|$)/ {
+    fail("__FUNCTION__")
+}
+
+BEGIN { doc["__CYGWIN32__"] = "\
+Do not use __CYGWIN32__, instead use __CYGWIN__ or, better, an explicit \
+autoconf tests"
+    category["__CYGWIN32__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__CYGWIN32__([^_[:alnum:]]|$)/ {
+    fail("__CYGWIN32__")
+}
+
+BEGIN { doc["PTR"] = "\
+Do not use PTR, ISO C 90 implies `void *'\''"
+    category["PTR"] = ari_regression
+    #fix("PTR", "gdb/utils.c", 6)
+}
+/(^|[^_[:alnum:]])PTR([^_[:alnum:]]|$)/ {
+    fail("PTR")
+}
+
+BEGIN { doc["UCASE function"] = "\
+Function name is uppercase."
+    category["UCASE function"] = ari_code
+    possible_UCASE = 0
+    UCASE_full_line = ""
+}
+(possible_UCASE) {
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    # Closing brace found?
+    else if (UCASE_full_line ~ \
+	/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((UCASE_full_line ~ \
+	    /^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = UCASE_full_line;
+	    fail("UCASE function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_UCASE = 0
+	UCASE_full_line = ""
+    } else {
+	UCASE_full_line = UCASE_full_line $0;
+    }
+}
+/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_UCASE = 1
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    possible_FNR = FNR
+    UCASE_full_line = $0
+}
+
+
+BEGIN { doc["editCase function"] = "\
+Function name starts lower case but has uppercased letters."
+    category["editCase function"] = ari_code
+    possible_editCase = 0
+    editCase_full_line = ""
+}
+(possible_editCase) {
+    if (ARI_OK == "ediCase function") {
+	possible_editCase = 0
+    }
+    # Closing brace found?
+    else if (editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = editCase_full_line;
+	    fail("editCase function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_editCase = 0
+	editCase_full_line = ""
+    } else {
+	editCase_full_line = editCase_full_line $0;
+    }
+}
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_editCase = 1
+    if (ARI_OK == "editCase function") {
+        possible_editCase = 0
+    }
+    possible_FNR = FNR
+    editCase_full_line = $0
+}
+
+# Only function implementation should be on first column
+BEGIN { doc["function call in first column"] = "\
+Function name in first column should be restricted to function implementation"
+    category["function call in first column"] = ari_code
+}
+/^[a-z][a-z0-9_]*[[:space:]]*\((|[^*][^()]*)\)[[:space:]]*[^ \t]+/ {
+    fail("function call in first column")
+}
+
+
+# Functions without any parameter should have (void)
+# after their name not simply ().
+BEGIN { doc["no parameter function"] = "\
+Function having no parameter should be declared with funcname (void)."
+    category["no parameter function"] = ari_code
+}
+/^[a-zA-Z][a-z0-9A-Z_]*[[:space:]]*\(\)/ {
+    fail("no parameter function")
+}
+
+BEGIN { doc["hash"] = "\
+Do not use ` #...'\'', instead use `#...'\''(some compilers only correctly \
+parse a C preprocessor directive when `#'\'' is the first character on \
+the line)"
+    category["hash"] = ari_regression
+}
+/^[[:space:]]+#/ {
+    fail("hash")
+}
+
+BEGIN { doc["OP eol"] = "\
+Do not use &&, or || at the end of a line"
+    category["OP eol"] = ari_code
+}
+/(\|\||\&\&|==|!=)[[:space:]]*$/ {
+    fail("OP eol")
+}
+
+BEGIN { doc["strerror"] = "\
+Do not use strerror(), instead use safe_strerror()"
+    category["strerror"] = ari_regression
+    fix("strerror", "gdb/gdb_string.h", 1)
+    fix("strerror", "gdb/mingw-hdep.c", 1)
+    fix("strerror", "gdb/posix-hdep.c", 1)
+}
+/(^|[^_[:alnum:]])strerror[[:space:]]*\(/ {
+    fail("strerror")
+}
+
+BEGIN { doc["long long"] = "\
+Do not use `long long'\'', instead use LONGEST"
+    category["long long"] = ari_code
+    # defs.h needs two such patterns for LONGEST and ULONGEST definitions
+    fix("long long", "gdb/defs.h", 2)
+}
+/(^|[^_[:alnum:]])long[[:space:]]+long([^_[:alnum:]]|$)/ {
+    fail("long long")
+}
+
+BEGIN { doc["ATTRIBUTE_UNUSED"] = "\
+Do not use ATTRIBUTE_UNUSED, do not bother (GDB is compiled with -Werror and, \
+consequently, is not able to tolerate false warnings.  Since -Wunused-param \
+produces such warnings, neither that warning flag nor ATTRIBUTE_UNUSED \
+are used by GDB"
+    category["ATTRIBUTE_UNUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTRIBUTE_UNUSED([^_[:alnum:]]|$)/ {
+    fail("ATTRIBUTE_UNUSED")
+}
+
+BEGIN { doc["ATTR_FORMAT"] = "\
+Do not use ATTR_FORMAT, use ATTRIBUTE_PRINTF instead"
+    category["ATTR_FORMAT"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_FORMAT([^_[:alnum:]]|$)/ {
+    fail("ATTR_FORMAT")
+}
+
+BEGIN { doc["ATTR_NORETURN"] = "\
+Do not use ATTR_NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["ATTR_NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_NORETURN([^_[:alnum:]]|$)/ {
+    fail("ATTR_NORETURN")
+}
+
+BEGIN { doc["NORETURN"] = "\
+Do not use NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])NORETURN([^_[:alnum:]]|$)/ {
+    fail("NORETURN")
+}
+
+
+# General problems
+
+BEGIN { doc["multiple messages"] = "\
+Do not use multiple calls to warning or error, instead use a single call"
+    category["multiple messages"] = ari_gettext
+}
+FNR == 1 {
+    warning_fnr = -1
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(/ {
+    if (FNR == warning_fnr + 1) {
+	fail("multiple messages")
+    } else {
+	warning_fnr = FNR
+    }
+}
+
+# Commented out, but left inside sources, just in case.
+# BEGIN { doc["inline"] = "\
+# Do not use the inline attribute; \
+# since the compiler generally ignores this, better algorithm selection \
+# is needed to improved performance"
+#    category["inline"] = ari_code
+# }
+# /(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
+#     fail("inline")
+# }
+
+# This test is obsolete as this type
+# has been deprecated and finally suppressed from GDB sources
+#BEGIN { doc["obj_private"] = "\
+#Replace obj_private with objfile_data"
+#    category["obj_private"] = ari_obsolete
+#}
+#/(^|[^_[:alnum:]])obj_private([^_[:alnum:]]|$)/ {
+#    fail("obj_private")
+#}
+
+BEGIN { doc["abort"] = "\
+Do not use abort, instead use internal_error; GDB should never abort"
+    category["abort"] = ari_regression
+    fix("abort", "gdb/utils.c", 3)
+}
+/(^|[^_[:alnum:]])abort[[:space:]]*\(/ {
+    fail("abort")
+}
+
+BEGIN { doc["basename"] = "\
+Do not use basename, instead use lbasename"
+    category["basename"] = ari_regression
+}
+/(^|[^_[:alnum:]])basename[[:space:]]*\(/ {
+    fail("basename")
+}
+
+BEGIN { doc["assert"] = "\
+Do not use assert, instead use gdb_assert or internal_error; assert \
+calls abort and GDB should never call abort"
+    category["assert"] = ari_regression
+}
+/(^|[^_[:alnum:]])assert[[:space:]]*\(/ {
+    fail("assert")
+}
+
+BEGIN { doc["TARGET_HAS_HARDWARE_WATCHPOINTS"] = "\
+Replace TARGET_HAS_HARDWARE_WATCHPOINTS with nothing, not needed"
+    category["TARGET_HAS_HARDWARE_WATCHPOINTS"] = ari_regression
+}
+/(^|[^_[:alnum:]])TARGET_HAS_HARDWARE_WATCHPOINTS([^_[:alnum:]]|$)/ {
+    fail("TARGET_HAS_HARDWARE_WATCHPOINTS")
+}
+
+BEGIN { doc["ADD_SHARED_SYMBOL_FILES"] = "\
+Replace ADD_SHARED_SYMBOL_FILES with nothing, not needed?"
+    category["ADD_SHARED_SYMBOL_FILES"] = ari_regression
+}
+/(^|[^_[:alnum:]])ADD_SHARED_SYMBOL_FILES([^_[:alnum:]]|$)/ {
+    fail("ADD_SHARED_SYMBOL_FILES")
+}
+
+BEGIN { doc["SOLIB_ADD"] = "\
+Replace SOLIB_ADD with nothing, not needed?"
+    category["SOLIB_ADD"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_ADD([^_[:alnum:]]|$)/ {
+    fail("SOLIB_ADD")
+}
+
+BEGIN { doc["SOLIB_CREATE_INFERIOR_HOOK"] = "\
+Replace SOLIB_CREATE_INFERIOR_HOOK with nothing, not needed?"
+    category["SOLIB_CREATE_INFERIOR_HOOK"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_CREATE_INFERIOR_HOOK([^_[:alnum:]]|$)/ {
+    fail("SOLIB_CREATE_INFERIOR_HOOK")
+}
+
+BEGIN { doc["SOLIB_LOADED_LIBRARY_PATHNAME"] = "\
+Replace SOLIB_LOADED_LIBRARY_PATHNAME with nothing, not needed?"
+    category["SOLIB_LOADED_LIBRARY_PATHNAME"] = ari_regression
+}
+/(^|[^_[:alnum:]])SOLIB_LOADED_LIBRARY_PATHNAME([^_[:alnum:]]|$)/ {
+    fail("SOLIB_LOADED_LIBRARY_PATHNAME")
+}
+
+BEGIN { doc["REGISTER_U_ADDR"] = "\
+Replace REGISTER_U_ADDR with nothing, not needed?"
+    category["REGISTER_U_ADDR"] = ari_regression
+}
+/(^|[^_[:alnum:]])REGISTER_U_ADDR([^_[:alnum:]]|$)/ {
+    fail("REGISTER_U_ADDR")
+}
+
+BEGIN { doc["PROCESS_LINENUMBER_HOOK"] = "\
+Replace PROCESS_LINENUMBER_HOOK with nothing, not needed?"
+    category["PROCESS_LINENUMBER_HOOK"] = ari_regression
+}
+/(^|[^_[:alnum:]])PROCESS_LINENUMBER_HOOK([^_[:alnum:]]|$)/ {
+    fail("PROCESS_LINENUMBER_HOOK")
+}
+
+BEGIN { doc["PC_SOLIB"] = "\
+Replace PC_SOLIB with nothing, not needed?"
+    category["PC_SOLIB"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])PC_SOLIB([^_[:alnum:]]|$)/ {
+    fail("PC_SOLIB")
+}
+
+BEGIN { doc["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = "\
+Replace IN_SOLIB_DYNSYM_RESOLVE_CODE with nothing, not needed?"
+    category["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = ari_regression
+}
+/(^|[^_[:alnum:]])IN_SOLIB_DYNSYM_RESOLVE_CODE([^_[:alnum:]]|$)/ {
+    fail("IN_SOLIB_DYNSYM_RESOLVE_CODE")
+}
+
+BEGIN { doc["GCC_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["GCC2_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC2_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC2_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC2_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC2_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["FUNCTION_EPILOGUE_SIZE"] = "\
+Replace FUNCTION_EPILOGUE_SIZE with nothing, not needed?"
+    category["FUNCTION_EPILOGUE_SIZE"] = ari_regression
+}
+/(^|[^_[:alnum:]])FUNCTION_EPILOGUE_SIZE([^_[:alnum:]]|$)/ {
+    fail("FUNCTION_EPILOGUE_SIZE")
+}
+
+BEGIN { doc["HAVE_VFORK"] = "\
+Do not use HAVE_VFORK, instead include \"gdb_vfork.h\" and call vfork() \
+unconditionally"
+    category["HAVE_VFORK"] = ari_regression
+}
+/(^|[^_[:alnum:]])HAVE_VFORK([^_[:alnum:]]|$)/ {
+    fail("HAVE_VFORK")
+}
+
+BEGIN { doc["bcmp"] = "\
+Do not use bcmp(), ISO C 90 implies memcmp()"
+    category["bcmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcmp[[:space:]]*\(/ {
+    fail("bcmp")
+}
+
+BEGIN { doc["setlinebuf"] = "\
+Do not use setlinebuf(), ISO C 90 implies setvbuf()"
+    category["setlinebuf"] = ari_regression
+}
+/(^|[^_[:alnum:]])setlinebuf[[:space:]]*\(/ {
+    fail("setlinebuf")
+}
+
+BEGIN { doc["bcopy"] = "\
+Do not use bcopy(), ISO C 90 implies memcpy() and memmove()"
+    category["bcopy"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcopy[[:space:]]*\(/ {
+    fail("bcopy")
+}
+
+BEGIN { doc["get_frame_base"] = "\
+Replace get_frame_base with get_frame_id, get_frame_base_address, \
+get_frame_locals_address, or get_frame_args_address."
+    category["get_frame_base"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])get_frame_base([^_[:alnum:]]|$)/ {
+    fail("get_frame_base")
+}
+
+BEGIN { doc["floatformat_to_double"] = "\
+Do not use floatformat_to_double() from libierty, \
+instead use floatformat_to_doublest()"
+    fix("floatformat_to_double", "gdb/doublest.c", 1)
+    category["floatformat_to_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_to_double[[:space:]]*\(/ {
+    fail("floatformat_to_double")
+}
+
+BEGIN { doc["floatformat_from_double"] = "\
+Do not use floatformat_from_double() from libierty, \
+instead use floatformat_from_doublest()"
+    category["floatformat_from_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_from_double[[:space:]]*\(/ {
+    fail("floatformat_from_double")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["LITTLE_ENDIAN"] = "\
+Do not use LITTLE_ENDIAN, instead use BFD_ENDIAN_LITTLE";
+    category["LITTLE_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])LITTLE_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("LITTLE_ENDIAN")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["sec_ptr"] = "\
+Instead of sec_ptr, use struct bfd_section";
+    category["sec_ptr"] = ari_regression
+}
+/(^|[^_[:alnum:]])sec_ptr([^_[:alnum:]]|$)/ {
+    fail("sec_ptr")
+}
+
+BEGIN { doc["frame_unwind_unsigned_register"] = "\
+Replace frame_unwind_unsigned_register with frame_unwind_register_unsigned"
+    category["frame_unwind_unsigned_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])frame_unwind_unsigned_register([^_[:alnum:]]|$)/ {
+    fail("frame_unwind_unsigned_register")
+}
+
+BEGIN { doc["frame_register_read"] = "\
+Replace frame_register_read() with get_frame_register(), or \
+possibly introduce a new method safe_get_frame_register()"
+    category["frame_register_read"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])frame_register_read([^_[:alnum:]]|$)/ {
+    fail("frame_register_read")
+}
+
+BEGIN { doc["read_register"] = "\
+Replace read_register() with regcache_read() et.al."
+    category["read_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_register([^_[:alnum:]]|$)/ {
+    fail("read_register")
+}
+
+BEGIN { doc["write_register"] = "\
+Replace write_register() with regcache_read() et.al."
+    category["write_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])write_register([^_[:alnum:]]|$)/ {
+    fail("write_register")
+}
+
+function report(name) {
+    # Drop any trailing _P.
+    name = gensub(/(_P|_p)$/, "", 1, name)
+    # Convert to lower case
+    name = tolower(name)
+    # Split into category and bug
+    cat = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\1", 1, name)
+    bug = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\2", 1, name)
+    # Report it
+    name = cat " " bug
+    doc[name] = "Do not use " cat " " bug ", see declaration for details"
+    category[name] = cat
+    fail(name)
+}
+
+/(^|[^_[:alnum:]])(DEPRECATED|deprecated|set_gdbarch_deprecated|LEGACY|legacy|set_gdbarch_legacy)_/ {
+    line = $0
+    # print "0 =", $0
+    while (1) {
+	name = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\2", 1, line)
+	line = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\1 \\4", 1, line)
+	# print "name =", name, "line =", line
+	if (name == line) break;
+	report(name)
+    }
+}
+
+# Count the number of times each architecture method is set
+/(^|[^_[:alnum:]])set_gdbarch_[_[:alnum:]]*([^_[:alnum:]]|$)/ {
+    name = gensub(/^.*set_gdbarch_([_[:alnum:]]*).*$/, "\\1", 1, $0)
+    doc["set " name] = "\
+Call to set_gdbarch_" name
+    category["set " name] = ari_gdbarch
+    fail("set " name)
+}
+
+# Count the number of times each tm/xm/nm macro is defined or undefined
+/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+.*$/ \
+&& !/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+_H($|[[:space:]])/ \
+&& FILENAME ~ /(^|\/)config\/(|[^\/]*\/)(tm-|xm-|nm-).*\.h$/ {
+    basename = gensub(/(^|.*\/)([^\/]*)$/, "\\2", 1, FILENAME)
+    type = gensub(/^(tm|xm|nm)-.*\.h$/, "\\1", 1, basename)
+    name = gensub(/^#[[:space:]]*(undef|define)[[:space:]]+([[:alnum:]_]+).*$/, "\\2", 1, $0)
+    if (type == basename) {
+        type = "macro"
+    }
+    doc[type " " name] = "\
+Do not define macros such as " name " in a tm, nm or xm file, \
+in fact do not provide a tm, nm or xm file"
+    category[type " " name] = ari_macro
+    fail(type " " name)
+}
+
+BEGIN { doc["deprecated_registers"] = "\
+Replace deprecated_registers with nothing, they have reached \
+end-of-life"
+    category["deprecated_registers"] = ari_eol
+}
+/(^|[^_[:alnum:]])deprecated_registers([^_[:alnum:]]|$)/ {
+    fail("deprecated_registers")
+}
+
+BEGIN { doc["read_pc"] = "\
+Replace READ_PC() with frame_pc_unwind; \
+at present the inferior function call code still uses this"
+    category["read_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_PC[[:space:]]*\(/ {
+    fail("read_pc")
+}
+
+BEGIN { doc["write_pc"] = "\
+Replace write_pc() with get_frame_base_address or get_frame_id; \
+at present the inferior function call code still uses this when doing \
+a DECR_PC_AFTER_BREAK"
+    category["write_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_WRITE_PC[[:space:]]*\(/ {
+    fail("write_pc")
+}
+
+BEGIN { doc["generic_target_write_pc"] = "\
+Replace generic_target_write_pc with a per-architecture implementation, \
+this relies on PC_REGNUM which is being eliminated"
+    category["generic_target_write_pc"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_target_write_pc([^_[:alnum:]]|$)/ {
+    fail("generic_target_write_pc")
+}
+
+BEGIN { doc["read_sp"] = "\
+Replace read_sp() with frame_sp_unwind"
+    category["read_sp"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_SP[[:space:]]*\(/ {
+    fail("read_sp")
+}
+
+BEGIN { doc["register_cached"] = "\
+Replace register_cached() with nothing, does not have a regcache parameter"
+    category["register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])register_cached[[:space:]]*\(/ {
+    fail("register_cached")
+}
+
+BEGIN { doc["set_register_cached"] = "\
+Replace set_register_cached() with nothing, does not have a regcache parameter"
+    category["set_register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])set_register_cached[[:space:]]*\(/ {
+    fail("set_register_cached")
+}
+
+# Print functions: Use versions that either check for buffer overflow
+# or safely allocate a fresh buffer.
+
+BEGIN { doc["sprintf"] = "\
+Do not use sprintf, instead use xsnprintf or xstrprintf"
+    category["sprintf"] = ari_code
+}
+/(^|[^_[:alnum:]])sprintf[[:space:]]*\(/ {
+    fail("sprintf")
+}
+
+BEGIN { doc["vsprintf"] = "\
+Do not use vsprintf(), instead use xstrvprintf"
+    category["vsprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vsprintf[[:space:]]*\(/ {
+    fail("vsprintf")
+}
+
+BEGIN { doc["asprintf"] = "\
+Do not use asprintf(), instead use xstrprintf()"
+    category["asprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])asprintf[[:space:]]*\(/ {
+    fail("asprintf")
+}
+
+BEGIN { doc["vasprintf"] = "\
+Do not use vasprintf(), instead use xstrvprintf"
+    fix("vasprintf", "gdb/utils.c", 1)
+    category["vasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vasprintf[[:space:]]*\(/ {
+    fail("vasprintf")
+}
+
+BEGIN { doc["xasprintf"] = "\
+Do not use xasprintf(), instead use xstrprintf"
+    fix("xasprintf", "gdb/defs.h", 1)
+    fix("xasprintf", "gdb/utils.c", 1)
+    category["xasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xasprintf[[:space:]]*\(/ {
+    fail("xasprintf")
+}
+
+BEGIN { doc["xvasprintf"] = "\
+Do not use xvasprintf(), instead use xstrvprintf"
+    fix("xvasprintf", "gdb/defs.h", 1)
+    fix("xvasprintf", "gdb/utils.c", 1)
+    category["xvasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xvasprintf[[:space:]]*\(/ {
+    fail("xvasprintf")
+}
+
+# More generic memory operations
+
+BEGIN { doc["bzero"] = "\
+Do not use bzero(), instead use memset()"
+    category["bzero"] = ari_regression
+}
+/(^|[^_[:alnum:]])bzero[[:space:]]*\(/ {
+    fail("bzero")
+}
+
+BEGIN { doc["strdup"] = "\
+Do not use strdup(), instead use xstrdup()";
+    category["strdup"] = ari_regression
+}
+/(^|[^_[:alnum:]])strdup[[:space:]]*\(/ {
+    fail("strdup")
+}
+
+BEGIN { doc["strsave"] = "\
+Do not use strsave(), instead use xstrdup() et.al."
+    category["strsave"] = ari_regression
+}
+/(^|[^_[:alnum:]])strsave[[:space:]]*\(/ {
+    fail("strsave")
+}
+
+# String compare functions
+
+BEGIN { doc["strnicmp"] = "\
+Do not use strnicmp(), instead use strncasecmp()"
+    category["strnicmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])strnicmp[[:space:]]*\(/ {
+    fail("strnicmp")
+}
+
+# Boolean expressions and conditionals
+
+BEGIN { doc["boolean"] = "\
+Do not use `boolean'\'',  use `int'\'' instead"
+    category["boolean"] = ari_regression
+}
+/(^|[^_[:alnum:]])boolean([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("boolean")
+    }
+}
+
+BEGIN { doc["false"] = "\
+Definitely do not use `false'\'' in boolean expressions"
+    category["false"] = ari_regression
+}
+/(^|[^_[:alnum:]])false([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("false")
+    }
+}
+
+BEGIN { doc["true"] = "\
+Do not try to use `true'\'' in boolean expressions"
+    category["true"] = ari_regression
+}
+/(^|[^_[:alnum:]])true([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("true")
+    }
+}
+
+# Typedefs that are either redundant or can be reduced to `struct
+# type *''.
+# Must be placed before if assignment otherwise ARI exceptions
+# are not handled correctly.
+
+BEGIN { doc["d_namelen"] = "\
+Do not use dirent.d_namelen, instead use NAMELEN"
+    category["d_namelen"] = ari_regression
+}
+/(^|[^_[:alnum:]])d_namelen([^_[:alnum:]]|$)/ {
+    fail("d_namelen")
+}
+
+BEGIN { doc["strlen d_name"] = "\
+Do not use strlen dirent.d_name, instead use NAMELEN"
+    category["strlen d_name"] = ari_regression
+}
+/(^|[^_[:alnum:]])strlen[[:space:]]*\(.*[^_[:alnum:]]d_name([^_[:alnum:]]|$)/ {
+    fail("strlen d_name")
+}
+
+BEGIN { doc["var_boolean"] = "\
+Replace var_boolean with add_setshow_boolean_cmd"
+    category["var_boolean"] = ari_regression
+    fix("var_boolean", "gdb/command.h", 1)
+    # fix only uses the last directory level
+    fix("var_boolean", "cli/cli-decode.c", 2)
+}
+/(^|[^_[:alnum:]])var_boolean([^_[:alnum:]]|$)/ {
+    if ($0 !~ /(^|[^_[:alnum:]])case *var_boolean:/) {
+	fail("var_boolean")
+    }
+}
+
+BEGIN { doc["generic_use_struct_convention"] = "\
+Replace generic_use_struct_convention with nothing, \
+EXTRACT_STRUCT_VALUE_ADDRESS is a predicate"
+    category["generic_use_struct_convention"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_use_struct_convention([^_[:alnum:]]|$)/ {
+    fail("generic_use_struct_convention")
+}
+
+BEGIN { doc["if assignment"] = "\
+An IF statement'\''s expression contains an assignment (the GNU coding \
+standard discourages this)"
+    category["if assignment"] = ari_code
+}
+BEGIN { doc["if clause more than 50 lines"] = "\
+An IF statement'\''s expression expands over 50 lines"
+    category["if clause more than 50 lines"] = ari_code
+}
+#
+# Accumulate continuation lines
+FNR == 1 {
+    in_if = 0
+}
+
+/(^|[^_[:alnum:]])if / {
+    in_if = 1;
+    if_brace_level = 0;
+    if_cont_p = 0;
+    if_count = 0;
+    if_brace_end_pos = 0;
+    if_full_line = "";
+}
+(in_if)  {
+    # We want everything up to closing brace of same level
+    if_count++;
+    if (if_count > 50) {
+	print "multiline if: " if_full_line $0
+	fail("if clause more than 50 lines")
+	if_brace_level = 0;
+	if_full_line = "";
+    } else {
+	if (if_count == 1) {
+	    i = index($0,"if ");
+	} else {
+	    i = 1;
+	}
+	for (i=i; i <= length($0); i++) {
+	    char = substr($0,i,1);
+	    if (char == "(") { if_brace_level++; }
+	    if (char == ")") {
+		if_brace_level--;
+		if (!if_brace_level) {
+		    if_brace_end_pos = i;
+		    after_if = substr($0,i+1,length($0));
+		    # Do not parse what is following
+		    break;
+		}
+	    }
+	}
+	if (if_brace_level == 0) {
+	    $0 = substr($0,1,i);
+	    in_if = 0;
+	} else {
+	    if_full_line = if_full_line $0;
+	    if_cont_p = 1;
+	    next;
+	}
+    }
+}
+# if we arrive here, we need to concatenate, but we are at brace level 0
+
+(if_brace_end_pos) {
+    $0 = if_full_line substr($0,1,if_brace_end_pos);
+    if (if_count > 1) {
+	# print "IF: multi line " if_count " found at " FILENAME ":" FNR " \"" $0 "\""
+    }
+    if_cont_p = 0;
+    if_full_line = "";
+}
+/(^|[^_[:alnum:]])if .* = / {
+    # print "fail in if " $0
+    fail("if assignment")
+}
+(if_brace_end_pos) {
+    $0 = $0 after_if;
+    if_brace_end_pos = 0;
+    in_if = 0;
+}
+
+# Printout of all found bug
+
+BEGIN {
+    if (print_doc) {
+	for (bug in doc) {
+	    fail(bug)
+	}
+	exit
+    }
+}' "$@"
+
Index: src/gdb/contrib/ari/gdb_find.sh
===================================================================
RCS file: contrib/ari/gdb_find.sh
diff -N contrib/ari/gdb_find.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ src/gdb/contrib/ari/gdb_find.sh	14 Jun 2012 10:20:54 -0000
@@ -0,0 +1,41 @@
+#!/bin/sh
+
+# GDB script to create list of files to check using gdb_ari.sh.
+#
+# Copyright (C) 2003-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=C ; export LANG
+LC_ALL=C ; export LC_ALL
+
+
+# A find that prunes files that GDB users shouldn't be interested in.
+# Use sort to order files alphabetically.
+
+find "$@" \
+    -name testsuite -prune -o \
+    -name gdbserver -prune -o \
+    -name gnulib -prune -o \
+    -name osf-share -prune -o \
+    -name '*-stub.c' -prune -o \
+    -name '*-exp.c' -prune -o \
+    -name ada-lex.c -prune -o \
+    -name cp-name-parser.c -prune -o \
+    -type f -name '*.[lyhc]' -print | sort
Index: src/gdb/contrib/ari/update-web-ari.sh
===================================================================
RCS file: contrib/ari/update-web-ari.sh
diff -N contrib/ari/update-web-ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ src/gdb/contrib/ari/update-web-ari.sh	14 Jun 2012 10:20:54 -0000
@@ -0,0 +1,921 @@
+#!/bin/sh -x
+
+# GDB script to create GDB ARI web page.
+#
+# Copyright (C) 2001-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# TODO: setjmp.h, setjmp and longjmp.
+
+# Direct stderr into stdout but still hang onto stderr (/dev/fd/3)
+exec 3>&2 2>&1
+ECHO ()
+{
+#   echo "$@" | tee /dev/fd/3 1>&2
+    echo "$@" 1>&2
+    echo "$@" 1>&3
+}
+
+# Really mindless usage
+if test $# -ne 4
+then
+    echo "Usage: $0 <snapshot/sourcedir> <tmpdir> <destdir> <project>" 1>&2
+    exit 1
+fi
+snapshot=$1 ; shift
+tmpdir=$1 ; shift
+wwwdir=$1 ; shift
+project=$1 ; shift
+
+# Try to create destination directory if it doesn't exist yet
+if [ ! -d ${wwwdir} ]
+then
+  mkdir -p ${wwwdir}
+fi
+
+# Fail if destination directory doesn't exist or is not writable
+if [ ! -w ${wwwdir} -o ! -d ${wwwdir} ]
+then
+  echo ERROR: Can not write to directory ${wwwdir} >&2
+  exit 2
+fi
+
+if [ ! -r ${snapshot} ]
+then
+    echo ERROR: Can not read snapshot file 1>&2
+    exit 1
+fi
+
+# FILE formats
+# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+# Where ``*'' is {source,warning,indent,doschk}
+
+unpack_source_p=true
+delete_source_p=true
+
+check_warning_p=false # broken
+check_indent_p=false # too slow, too many fail
+check_source_p=true
+check_doschk_p=true
+check_werror_p=true
+
+update_doc_p=true
+update_web_p=true
+
+if awk --version 2>&1 </dev/null | grep -i gnu > /dev/null
+then
+  AWK=awk
+else
+  AWK=gawk
+fi
+export AWK
+
+# Set up a few cleanups
+if ${delete_source_p}
+then
+    trap "cd /tmp; rm -rf ${tmpdir}; exit" 0 1 2 15
+fi
+
+
+# If the first parameter is a directory,
+#we just use it as the extracted source
+if [ -d ${snapshot} ]
+then
+  module=${project}
+  srcdir=${snapshot}
+  aridir=${srcdir}/${module}/contrib/ari
+  unpack_source_p=false
+  delete_source_p=false
+  version_in=${srcdir}/${module}/version.in
+else
+  # unpack the tar-ball
+  if ${unpack_source_p}
+  then
+    # Was it previously unpacked?
+    if ${delete_source_p} || test ! -d ${tmpdir}/${module}*
+    then
+	/bin/rm -rf "${tmpdir}"
+	/bin/mkdir -p ${tmpdir}
+	if [ ! -d ${tmpdir} ]
+	then
+	    echo "Problem creating work directory"
+	    exit 1
+	fi
+	cd ${tmpdir} || exit 1
+	echo `date`: Unpacking tar-ball ...
+	case ${snapshot} in
+	    *.tar.bz2 ) bzcat ${snapshot} ;;
+	    *.tar ) cat ${snapshot} ;;
+	    * ) ECHO Bad file ${snapshot} ; exit 1 ;;
+	esac | tar xf -
+    fi
+  fi
+
+  module=`basename ${snapshot}`
+  module=`basename ${module} .bz2`
+  module=`basename ${module} .tar`
+  srcdir=`echo ${tmpdir}/${module}*`
+  aridir=${HOME}/ss
+  version_in=${srcdir}/gdb/version.in
+fi
+
+if [ ! -r ${version_in} ]
+then
+    echo ERROR: missing version file 1>&2
+    exit 1
+fi
+version=`cat ${version_in}`
+
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_warning_p} && test -d "${srcdir}"
+then
+    echo `date`: Parsing compiler warnings 1>&2
+    cat ${root}/ari.compile | $AWK '
+BEGIN {
+    FS=":";
+}
+/^[^:]*:[0-9]*: warning:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  warning[file] += 1;
+}
+/^[^:]*:[0-9]*: error:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  error[file] += 1;
+}
+END {
+  for (file in warning) {
+    print file ":warning:" level[file]
+  }
+  for (file in error) {
+    print file ":error:" level[file]
+  }
+}
+' > ${root}/ari.warning.bug
+fi
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_indent_p} && test -d "${srcdir}"
+then
+    printf "Analizing file indentation:" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh ${project} | while read f
+    do
+	if /bin/sh ${aridir}/gdb_indent.sh < ${f} 2>/dev/null | cmp -s - ${f}
+	then
+	    :
+	else
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    echo "${f}:0: info: indent: Indentation does not match GNU indent output"
+	fi
+    done ) > ${wwwdir}/ari.indent.bug
+    echo ""
+fi
+
+if ${check_source_p} && test -d "${srcdir}"
+then
+    bugf=${wwwdir}/ari.source.bug
+    oldf=${wwwdir}/ari.source.old
+    srcf=${wwwdir}/ari.source.lines
+    oldsrcf=${wwwdir}/ari.source.lines-old
+
+    diff=${wwwdir}/ari.source.diff
+    diffin=${diff}-in
+    newf1=${bugf}1
+    oldf1=${oldf}1
+    oldpruned=${oldf1}-pruned
+    newpruned=${newf1}-pruned
+
+    cp -f ${bugf} ${oldf}
+    cp -f ${srcf} ${oldsrcf}
+    rm -f ${srcf}
+    node=`uname -n`
+    echo "`date`: Using source lines ${srcf}" 1>&2
+    echo "`date`: Checking source code" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh "${project}" | \
+	xargs /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --src=${srcf}
+    ) > ${bugf}
+    # Remove things we are not interested in to signal by email
+    # gdbarch changes are not important here
+    # Also convert ` into ' to avoid command substitution in script below
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${oldf} > ${oldf1}
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${bugf} > ${newf1}
+    # Remove line number info so that code inclusion/deletion
+    # has no impact on the result
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${oldf1} > ${oldpruned}
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${newf1} > ${newpruned}
+    # Use diff without option to get normal diff output that
+    # is reparsed after
+    diff ${oldpruned} ${newpruned} > ${diffin}
+    # Only keep new warnings
+    sed -n -e "/^>.*/p" ${diffin} > ${diff}
+    sedscript=${wwwdir}/sedscript
+    script=${wwwdir}/script
+    sed -n -e "s|\(^[0-9,]*\)a\(.*\)|echo \1a\2 \n \
+	sed -n \'\2s:\\\\(.*\\\\):> \\\\1:p\' ${newf1}|p" \
+	-e "s|\(^[0-9,]*\)d\(.*\)|echo \1d\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1}|p" \
+	-e "s|\(^[0-9,]*\)c\(.*\)|echo \1c\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1} \n \
+	sed -n \"\2s:\\\\(.*\\\\):> \\\\1:p\" ${newf1}|p" \
+	${diffin} > ${sedscript}
+    ${SHELL} ${sedscript} > ${wwwdir}/message
+    sed -n \
+	-e "s;\(.*\);echo \\\"\1\\\";p" \
+	-e "s;.*< \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${oldsrcf};p" \
+	-e "s;.*> \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${srcf};p" \
+	${wwwdir}/message > ${script}
+    ${SHELL} ${script} > ${wwwdir}/mail-message
+    if [ "x${branch}" != "x" ]; then
+	email_suffix="`date` in ${branch}"
+    else
+	email_suffix="`date`"
+    fi
+
+fi
+
+
+
+
+if ${check_doschk_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking for doschk" 1>&2
+    rm -f "${wwwdir}"/ari.doschk.*
+    fnchange_lst="${srcdir}"/gdb/config/djgpp/fnchange.lst
+    fnchange_awk="${wwwdir}"/ari.doschk.awk
+    doschk_in="${wwwdir}"/ari.doschk.in
+    doschk_out="${wwwdir}"/ari.doschk.out
+    doschk_bug="${wwwdir}"/ari.doschk.bug
+    doschk_char="${wwwdir}"/ari.doschk.char
+
+    # Transform fnchange.lst into fnchange.awk.  The program DJTAR
+    # does a textual substitution of each file name using the list.
+    # Generate an awk script that does the equivalent - matches an
+    # exact line and then outputs the replacement.
+
+    sed -e 's;@[^@]*@[/]*\([^ ]*\) @[^@]*@[/]*\([^ ]*\);\$0 == "\1" { print "\2"\; next\; };' \
+	< "${fnchange_lst}" > "${fnchange_awk}"
+    echo '{ print }' >> "${fnchange_awk}"
+
+    # Do the raw analysis - transform the list of files into the DJGPP
+    # equivalents putting it in the .in file
+    ( cd "${srcdir}" && find * \
+	-name '*.info-[0-9]*' -prune \
+	-o -name tcl -prune \
+	-o -name itcl -prune \
+	-o -name tk -prune \
+	-o -name libgui -prune \
+	-o -name tix -prune \
+	-o -name dejagnu -prune \
+	-o -name expect -prune \
+	-o -type f -print ) \
+    | $AWK -f ${fnchange_awk} > ${doschk_in}
+
+    # Start with a clean slate
+    rm -f ${doschk_bug}
+
+    # Check for any invalid characters.
+    grep '[\+\,\;\=\[\]\|\<\>\\\"\:\?\*]' < ${doschk_in} > ${doschk_char}
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    sed < ${doschk_char} >> ${doschk_bug} \
+	-e s'/$/:0: dos: DOSCHK: Invalid DOS character/'
+
+    # Magic to map ari.doschk.out to ari.doschk.bug goes here
+    doschk < ${doschk_in} > ${doschk_out}
+    cat ${doschk_out} | $AWK >> ${doschk_bug} '
+BEGIN {
+    state = 1;
+    invalid_dos = state++; bug[invalid_dos] = "invalid DOS file name";  category[invalid_dos] = "dos";
+    same_dos = state++;    bug[same_dos]    = "DOS 8.3";                category[same_dos] = "dos";
+    same_sysv = state++;   bug[same_sysv]   = "SysV";
+    long_sysv = state++;   bug[long_sysv]   = "long SysV";
+    internal = state++;    bug[internal]    = "internal doschk";        category[internal] = "internal";
+    state = 0;
+}
+/^$/ { state = 0; next; }
+/^The .* not valid DOS/     { state = invalid_dos; next; }
+/^The .* same DOS/          { state = same_dos; next; }
+/^The .* same SysV/         { state = same_sysv; next; }
+/^The .* too long for SysV/ { state = long_sysv; next; }
+/^The .* /                  { state = internal; next; }
+
+NF == 0 { next }
+
+NF == 3 { name = $1 ; file = $3 }
+NF == 1 { file = $1 }
+NF > 3 && $2 == "-" { file = $1 ; name = gensub(/^.* - /, "", 1) }
+
+state == same_dos {
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print  file ":0: " category[state] ": " \
+	name " " bug[state] " " " dup: " \
+	" DOSCHK - the names " name " and " file " resolve to the same" \
+	" file on a " bug[state] \
+	" system.<br>For DOS, this can be fixed by modifying the file" \
+	" fnchange.lst."
+    next
+}
+state == invalid_dos {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  name ": DOSCHK - " name
+    next
+}
+state == internal {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  bug[state] ": DOSCHK - a " \
+	bug[state] " problem"
+}
+'
+fi
+
+
+
+if ${check_werror_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking Makefile.in for non- -Werror rules"
+    rm -f ${wwwdir}/ari.werror.*
+    cat "${srcdir}/${project}/Makefile.in" | $AWK > ${wwwdir}/ari.werror.bug '
+BEGIN {
+    count = 0
+    cont_p = 0
+    full_line = ""
+}
+/^[-_[:alnum:]]+\.o:/ {
+    file = gensub(/.o:.*/, "", 1) ".c"
+}
+
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+/\$\(COMPILE\.pre\)/ {
+    print file " has  line " $0
+    if (($0 !~ /\$\(.*ERROR_CFLAGS\)/) && ($0 !~ /\$\(INTERNAL_CFLAGS\)/)) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print "'"${project}"'/" file ":0: info: Werror: The file is not being compiled with -Werror"
+    }
+}
+'
+fi
+
+
+# From the warnings, generate the doc and indexed bug files
+if ${update_doc_p}
+then
+    cd ${wwwdir}
+    rm -f ari.doc ari.idx ari.doc.bug
+    # Generate an extra file containing all the bugs that the ARI can detect.
+    /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --print-doc >> ari.doc.bug
+    cat ari.*.bug | $AWK > ari.idx '
+BEGIN {
+    FS=": *"
+}
+{
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    file = $1
+    line = $2
+    category = $3
+    bug = $4
+    if (! (bug in cat)) {
+	cat[bug] = category
+	# strip any trailing .... (supplement)
+	doc[bug] = gensub(/ \([^\)]*\)$/, "", 1, $5)
+	count[bug] = 0
+    }
+    if (file != "") {
+	count[bug] += 1
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	print bug ":" file ":" category
+    }
+    # Also accumulate some categories as obsolete
+    if (category == "deprecated") {
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	if (file != "") {
+	    print category ":" file ":" "obsolete"
+	}
+	#count[category]++
+	#doc[category] = "Contains " category " code"
+    }
+}
+END {
+    i = 0;
+    for (bug in count) {
+	# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+	print bug ":" count[bug] ":" cat[bug] ":" doc[bug] >> "ari.doc"
+    }
+}
+'
+fi
+
+
+# print_toc BIAS MIN_COUNT CATEGORIES TITLE
+
+# Print a table of contents containing the bugs CATEGORIES.  If the
+# BUG count >= MIN_COUNT print it in the table-of-contents.  If
+# MIN_COUNT is non -ve, also include a link to the table.Adjust the
+# printed BUG count by BIAS.
+
+all=
+
+print_toc ()
+{
+    bias="$1" ; shift
+    min_count="$1" ; shift
+
+    all=" $all $1 "
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    shift
+
+    title="$@" ; shift
+
+    echo "<p>" >> ${newari}
+    echo "<a name=${title}>" | tr '[A-Z]' '[a-z]' >> ${newari}
+    echo "<h3>${title}</h3>" >> ${newari}
+    cat >> ${newari} # description
+
+    cat >> ${newari} <<EOF
+<p>
+<table>
+<tr><th align=left>BUG</th><th>Total</th><th align=left>Description</th></tr>
+EOF
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    cat ${wwwdir}/ari.doc \
+    | sort -t: +1rn -2 +0d \
+    | $AWK >> ${newari} '
+BEGIN {
+    FS=":"
+    '"$categories"'
+    MIN_COUNT = '${min_count}'
+    BIAS = '${bias}'
+    total = 0
+    nr = 0
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (count < MIN_COUNT) next
+    if (!(category in categories)) next
+    nr += 1
+    total += count
+    printf "<tr>"
+    printf "<th align=left valign=top><a name=\"%s\">", bug
+    printf "%s", gensub(/_/, " ", "g", bug)
+    printf "</a></th>"
+    printf "<td align=right valign=top>"
+    if (count > 0 && MIN_COUNT >= 0) {
+	printf "<a href=\"#,%s\">%d</a></td>", bug, count + BIAS
+    } else {
+	printf "%d", count + BIAS
+    }
+    printf "</td>"
+    printf "<td align=left valign=top>%s</td>", doc
+    printf "</tr>"
+    print ""
+}
+END {
+    print "<tr><th align=right valign=top>" nr "</th><th align=right valign=top>" total "</th><td></td></tr>"
+}
+'
+cat >> ${newari} <<EOF
+</table>
+<p>
+EOF
+}
+
+
+print_table ()
+{
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    # Remember to prune the dir prefix from projects files
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    cat ${wwwdir}/ari.idx | $AWK >> ${newari} '
+function qsort (table,
+		middle, tmp, left, nr_left, right, nr_right, result) {
+    middle = ""
+    for (middle in table) { break; }
+    nr_left = 0;
+    nr_right = 0;
+    for (tmp in table) {
+	if (tolower(tmp) < tolower(middle)) {
+	    nr_left++
+	    left[tmp] = tmp
+	} else if (tolower(tmp) > tolower(middle)) {
+	    nr_right++
+	    right[tmp] = tmp
+	}
+    }
+    #print "qsort " nr_left " " middle " " nr_right > "/dev/stderr"
+    result = ""
+    if (nr_left > 0) {
+	result = qsort(left) SUBSEP
+    }
+    result = result middle
+    if (nr_right > 0) {
+	result = result SUBSEP qsort(right)
+    }
+    return result
+}
+function print_heading (where, bug_i) {
+    print ""
+    print "<tr border=1>"
+    print "<th align=left>File</th>"
+    print "<th align=left><em>Total</em></th>"
+    print "<th></th>"
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th>"
+	# The title names are offset by one.  Otherwize, when the browser
+	# jumps to the name it leaves out half the relevant column.
+	#printf "<a name=\",%s\">&nbsp;</a>", bug
+	printf "<a name=\",%s\">&nbsp;</a>", i2bug[bug_i-1]
+	printf "<a href=\"#%s\">", bug
+	printf "%s", gensub(/_/, " ", "g", bug)
+	printf "</a>\n"
+	printf "</th>\n"
+    }
+    #print "<th></th>"
+    printf "<th><a name=\"%s,\">&nbsp;</a></th>\n", i2bug[bug_i-1]
+    print "<th align=left><em>Total</em></th>"
+    print "<th align=left>File</th>"
+    print "</tr>"
+}
+function print_totals (where, bug_i) {
+    print "<th align=left><em>Totals</em></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&gt;"
+    printf "</th>\n"
+    print "<th></th>";
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th align=right>"
+	printf "<em>"
+	printf "<a href=\"#%s\">%d</a>", bug, bug_total[bug]
+	printf "</em>";
+	printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, where], bug
+	printf "<a href=\"#%s,%s\">v</a>", next_file[bug, where], bug
+	printf "<a name=\"%s,%s\">&nbsp;</a>", where, bug
+	printf "</th>";
+	print ""
+    }
+    print "<th></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&lt;"
+    printf "</th>\n"
+    print "<th align=left><em>Totals</em></th>"
+    print "</tr>"
+}
+BEGIN {
+    FS = ":"
+    '"${categories}"'
+    nr_file = 0;
+    nr_bug = 0;
+}
+{
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    bug = $1
+    file = $2
+    category = $3
+    # Interested in this
+    if (!(category in categories)) next
+    # Totals
+    db[bug, file] += 1
+    bug_total[bug] += 1
+    file_total[file] += 1
+    total += 1
+}
+END {
+
+    # Sort the files and bugs creating indexed lists.
+    nr_bug = split(qsort(bug_total), i2bug, SUBSEP);
+    nr_file = split(qsort(file_total), i2file, SUBSEP);
+
+    # Dummy entries for first/last
+    i2file[0] = 0
+    i2file[-1] = -1
+    i2bug[0] = 0
+    i2bug[-1] = -1
+
+    # Construct a cycle of next/prev links.  The file/bug "0" and "-1"
+    # are used to identify the start/end of the cycle.  Consequently,
+    # prev(0) = -1 (prev of start is the end) and next(-1) = 0 (next
+    # of end is the start).
+
+    # For all the bugs, create a cycle that goes to the prev / next file.
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i]
+	prev = 0
+	prev_file[bug, 0] = -1
+	next_file[bug, -1] = 0
+	for (file_i = 1; file_i <= nr_file; file_i++) {
+	    file = i2file[file_i]
+	    if ((bug, file) in db) {
+		prev_file[bug, file] = prev
+		next_file[bug, prev] = file
+		prev = file
+	    }
+	}
+	prev_file[bug, -1] = prev
+	next_file[bug, prev] = -1
+    }
+
+    # For all the files, create a cycle that goes to the prev / next bug.
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i]
+	prev = 0
+	prev_bug[file, 0] = -1
+	next_bug[file, -1] = 0
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i]
+	    if ((bug, file) in db) {
+		prev_bug[file, bug] = prev
+		next_bug[file, prev] = bug
+		prev = bug
+	    }
+	}
+	prev_bug[file, -1] = prev
+	next_bug[file, prev] = -1
+    }
+
+    print "<table border=1 cellspacing=0>"
+    print "<tr></tr>"
+    print_heading(0);
+    print "<tr></tr>"
+    print_totals(0);
+    print "<tr></tr>"
+
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i];
+	pfile = gensub(/^'${project}'\//, "", 1, file)
+	print ""
+	print "<tr>"
+	print "<th align=left><a name=\"" file ",\">" pfile "</a></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&gt;</a>", file, next_bug[file, 0]
+	printf "</th>\n"
+	print "<th></th>"
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i];
+	    if ((bug, file) in db) {
+		printf "<td align=right>"
+		printf "<a href=\"#%s\">%d</a>", bug, db[bug, file]
+		printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, file], bug
+		printf "<a href=\"#%s,%s\">v</a>", next_file[bug, file], bug
+		printf "<a name=\"%s,%s\">&nbsp;</a>", file, bug
+		printf "</td>"
+		print ""
+	    } else {
+		print "<td>&nbsp;</td>"
+		#print "<td></td>"
+	    }
+	}
+	print "<th></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&lt;</a>", file, prev_bug[file, -1]
+	printf "</th>\n"
+	print "<th align=left>" pfile "</th>"
+	print "</tr>"
+    }
+
+    print "<tr></tr>"
+    print_totals(-1)
+    print "<tr></tr>"
+    print_heading(-1);
+    print "<tr></tr>"
+    print ""
+    print "</table>"
+    print ""
+}
+'
+}
+
+
+# Make the scripts available
+cp ${aridir}/gdb_*.sh ${wwwdir}
+
+# Compute the ARI index - ratio of zero vs non-zero problems.
+indexes=`${AWK} '
+BEGIN {
+    FS=":"
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1; count = $2; category = $3; doc = $4
+
+    if (bug ~ /^legacy_/) legacy++
+    if (bug ~ /^deprecated_/) deprecated++
+
+    if (category !~ /^gdbarch$/) {
+	bugs += count
+    }
+    if (count == 0) {
+	oks++
+    }
+}
+END {
+    #print "tests/ok:", nr / ok
+    #print "bugs/tests:", bugs / nr
+    #print "bugs/ok:", bugs / ok
+    print bugs / ( oks + legacy + deprecated )
+}
+' ${wwwdir}/ari.doc`
+
+# Merge, generating the ARI tables.
+if ${update_web_p}
+then
+    echo "Create the ARI table" 1>&2
+    oldari=${wwwdir}/old.html
+    ari=${wwwdir}/index.html
+    newari=${wwwdir}/new.html
+    rm -f ${newari} ${newari}.gz
+    cat <<EOF >> ${newari}
+<html>
+<head>
+<title>A.R. Index for GDB version ${version}</title>
+</head>
+<body>
+
+<center><h2>A.R. Index for GDB version ${version}<h2></center>
+
+<!-- body, update above using ../index.sh -->
+
+<!-- Navigation.  This page contains the following anchors.
+"BUG": The definition of the bug.
+"FILE,BUG": The row/column containing FILEs BUG count
+"0,BUG", "-1,BUG": The top/bottom total for BUGs column.
+"FILE,O", "FILE,-1": The left/right total for FILEs row.
+",BUG": The top title for BUGs column.
+"FILE,": The left title for FILEs row.
+-->
+
+<center><h3>${indexes}</h3></center>
+<center><h3>You can not take this seriously!</h3></center>
+
+<center>
+Also available:
+<a href="../gdb/ari/">most recent branch</a>
+|
+<a href="../gdb/current/ari/">current</a>
+|
+<a href="../gdb/download/ari/">last release</a>
+</center>
+
+<center>
+Last updated: `date -u`
+</center>
+EOF
+
+    print_toc 0 1 "internal regression" Critical <<EOF
+Things previously eliminated but returned.  This should always be empty.
+EOF
+
+    print_table "regression code comment obsolete gettext"
+
+    print_toc 0 0 code Code <<EOF
+Coding standard problems, portability problems, readability problems.
+EOF
+
+    print_toc 0 0 comment Comments <<EOF
+Problems concerning comments in source files.
+EOF
+
+    print_toc 0 0 gettext GetText <<EOF
+Gettext related problems.
+EOF
+
+    print_toc 0 -1 dos DOS 8.3 File Names <<EOF
+File names with problems on 8.3 file systems.
+EOF
+
+    print_toc -2 -1 deprecated Deprecated <<EOF
+Mechanisms that have been replaced with something better, simpler,
+cleaner; or are no longer required by core-GDB.  New code should not
+use deprecated mechanisms.  Existing code, when touched, should be
+updated to use non-deprecated mechanisms.  See obsolete and deprecate.
+(The declaration and definition are hopefully excluded from count so
+zero should indicate no remaining uses).
+EOF
+
+    print_toc 0 0 obsolete Obsolete <<EOF
+Mechanisms that have been replaced, but have not yet been marked as
+such (using the deprecated_ prefix).  See deprecate and deprecated.
+EOF
+
+    print_toc 0 -1 deprecate Deprecate <<EOF
+Mechanisms that are a candidate for being made obsolete.  Once core
+GDB no longer depends on these mechanisms and/or there is a
+replacement available, these mechanims can be deprecated (adding the
+deprecated prefix) obsoleted (put into category obsolete) or deleted.
+See obsolete and deprecated.
+EOF
+
+    print_toc -2 -1 legacy Legacy <<EOF
+Methods used to prop up targets using targets that still depend on
+deprecated mechanisms. (The method's declaration and definition are
+hopefully excluded from count).
+EOF
+
+    print_toc -2 -1 gdbarch Gdbarch <<EOF
+Count of calls to the gdbarch set methods.  (Declaration and
+definition hopefully excluded from count).
+EOF
+
+    print_toc 0 -1 macro Macro <<EOF
+Breakdown of macro definitions (and #undef) in configuration files.
+EOF
+
+    print_toc 0 0 regression Fixed <<EOF
+Problems that have been expunged from the source code.
+EOF
+
+    # Check for invalid categories
+    for a in $all; do
+	alls="$alls all[$a] = 1 ;"
+    done
+    cat ari.*.doc | $AWK >> ${newari} '
+BEGIN {
+    FS = ":"
+    '"$alls"'
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (!(category in all)) {
+	print "<b>" category "</b>: no documentation<br>"
+    }
+}
+'
+
+    cat >> ${newari} <<EOF
+<center>
+Input files:
+`( cd ${wwwdir} && ls ari.*.bug ari.idx ari.doc ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<center>
+Scripts:
+`( cd ${wwwdir} && ls *.sh ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<!-- /body, update below using ../index.sh -->
+</body>
+</html>
+EOF
+
+    for i in . .. ../..; do
+	x=${wwwdir}/${i}/index.sh
+	if test -x $x; then
+	    $x ${newari}
+	    break
+	fi
+    done
+
+    gzip -c -v -9 ${newari} > ${newari}.gz
+
+    cp ${ari} ${oldari}
+    cp ${ari}.gz ${oldari}.gz
+    cp ${newari} ${ari}
+    cp ${newari}.gz ${ari}.gz
+
+fi # update_web_p
+
+# ls -l ${wwwdir}
+
+exit 0

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v4] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-06-14 12:36                 ` [RFA-v4] " Pierre Muller
@ 2012-06-14 16:02                   ` Joel Brobecker
  2012-06-14 16:14                     ` Pierre Muller
  2012-09-26 22:15                   ` [RFA-v5] " Pierre Muller
  1 sibling, 1 reply; 32+ messages in thread
From: Joel Brobecker @ 2012-06-14 16:02 UTC (permalink / raw)
  To: Pierre Muller
  Cc: 'Pedro Alves', gdb-patches, 'Jan Kratochvil',
	'Sergio Durigan Junior'

On Thu, Jun 14, 2012 at 02:35:23PM +0200, Pierre Muller wrote:
>   Here is a new version of my patch to insert ARI web page generation
>   script into GDB sources.

Thank you!

> As requested I removed the references to emails
> for sourceware.

Nice.

> Otherwise I also made a small change to generate the web page in a
> subdir called trunk/ari if no tag in found in CVS subdirectory and
> branch/ari otherwise.

For this one, I do not think that this is a good idea. Not everyone
works from a CVS sandbox anymore, so the assumption is going to break
for those. But also, I don't see why we would need to produce the data
in a different directory based on the type of sources.

We can let the caller decide where he wants the data to go...

-- 
Joel


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [RFA-v4] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-06-14 16:02                   ` Joel Brobecker
@ 2012-06-14 16:14                     ` Pierre Muller
  2012-06-14 16:22                       ` Joel Brobecker
  0 siblings, 1 reply; 32+ messages in thread
From: Pierre Muller @ 2012-06-14 16:14 UTC (permalink / raw)
  To: 'Joel Brobecker'
  Cc: 'Pedro Alves', gdb-patches, 'Jan Kratochvil',
	'Sergio Durigan Junior'

> >   Here is a new version of my patch to insert ARI web page generation
> >   script into GDB sources.
> 
> Thank you!
> 
> > As requested I removed the references to emails
> > for sourceware.
> 
> Nice.
> 
> > Otherwise I also made a small change to generate the web page in a
> > subdir called trunk/ari if no tag in found in CVS subdirectory and
> > branch/ari otherwise.
> 
> For this one, I do not think that this is a good idea. Not everyone
> works from a CVS sandbox anymore, so the assumption is going to break
> for those. But also, I don't see why we would need to produce the data
> in a different directory based on the type of sources.

  This is only because currently there are web links
inside the generated pages that should allow to switch from
trunk to branches (It would at least be useful that those link
still work when the script is called on the sourceware.org repository).
 
> We can let the caller decide where he wants the data to go...

  OK, this is already possible,
either by specifying a webdir environment variable
before calling create-web-ari-in-src.sh script
or by using directly
update-web-ari.sh script, you then need to specify all 4 args 
as explained in start of create-web-ari-in-src.sh script.

  I would just prefer the we do use another directory or a sub-directory,
because there is a lot of file generation, and copying of scripts
to destination dir.
  If webdir is current directory, it will be very hard to clean the
mess afterwards...

Pierre


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v4] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-06-14 16:14                     ` Pierre Muller
@ 2012-06-14 16:22                       ` Joel Brobecker
  2012-08-21 10:27                         ` About RFA for " Pierre Muller
       [not found]                         ` <50336283.a2db440a.600c.105dSMTPIN_ADDED@mx.google.com>
  0 siblings, 2 replies; 32+ messages in thread
From: Joel Brobecker @ 2012-06-14 16:22 UTC (permalink / raw)
  To: Pierre Muller
  Cc: 'Pedro Alves', gdb-patches, 'Jan Kratochvil',
	'Sergio Durigan Junior'

> This is only because currently there are web links
> inside the generated pages that should allow to switch from
> trunk to branches (It would at least be useful that those link
> still work when the script is called on the sourceware.org repository).
>  
> > We can let the caller decide where he wants the data to go...
> 
> OK, this is already possible, either by specifying a webdir
> environment variable before calling create-web-ari-in-src.sh script
> or by using directly update-web-ari.sh script, you then need to
> specify all 4 args as explained in start of create-web-ari-in-src.sh
> script.

I have a feeling that you are making things harder for yourself.
From my point of view, all we need is a script that generates
the data at a specified location.  We can then adapt the scripts
in the 'ss' repository to call them with the correct arguments.

I think that the knowledge about the layout of our web site in
update-web-ari.sh should remain in the 'ss' repository. And we can
then let it deal with the various idiosynchrasies of our web site.
Worse case scenario, I'll modify the web site itself, to point to
the new location...

>   I would just prefer the we do use another directory or a
>   sub-directory, because there is a lot of file generation, and
>   copying of scripts to destination dir.  If webdir is current
>   directory, it will be very hard to clean the mess afterwards...

But, isn't what the ari/ subdirectory is already about? In other
words, when you call the main script, it'll generate the data
in ./ari...

-- 
Joel


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v3] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-05-29 13:13               ` Pedro Alves
  2012-05-31  6:56                 ` Pierre Muller
  2012-06-14 12:36                 ` [RFA-v4] " Pierre Muller
@ 2012-06-22 16:10                 ` Tom Tromey
  2 siblings, 0 replies; 32+ messages in thread
From: Tom Tromey @ 2012-06-22 16:10 UTC (permalink / raw)
  To: Pedro Alves
  Cc: Joel Brobecker, Pierre Muller, gdb-patches,
	'Jan Kratochvil', 'Sergio Durigan Junior'

>>>>> "Pedro" == Pedro Alves <palves@redhat.com> writes:

Pedro> I think it should.  "contrib" is by definition a space for third party
Pedro> contributed sources that we ship along, ergo not really part of GDB,
Pedro> unlike the cli, mi, or tui, regformats, etc. subdirectories.

I very strongly dislike ChangeLog proliferation.  I don't see any
benefit in multiple ChangeLogs, but several disadvantages.  For example,
they make more work when submitting patches that span directories.  And,
they are actually less clear in this situation, because the relevant
bits of a patch are described in multiple places.  (This alone makes the
git log more useful than ChangeLog...)

Tom


^ permalink raw reply	[flat|nested] 32+ messages in thread

* About RFA for Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-06-14 16:22                       ` Joel Brobecker
@ 2012-08-21 10:27                         ` Pierre Muller
  2012-08-21 22:36                           ` Sergio Durigan Junior
       [not found]                         ` <50336283.a2db440a.600c.105dSMTPIN_ADDED@mx.google.com>
  1 sibling, 1 reply; 32+ messages in thread
From: Pierre Muller @ 2012-08-21 10:27 UTC (permalink / raw)
  To: gdb-patches
  Cc: 'Pedro Alves', 'Jan Kratochvil',
	'Sergio Durigan Junior', 'Joel Brobecker'


  I made several proposals to include the ARI script in the main source of
gdb, but I had the strong impression to get comments/suggestions for changes
that went in opposite directions in several replies I got in the 
four versions I sent.

  Thus, I left for a nice holidays without having been able to
commit anything.

  In order to avoid further digressions and as the thread is now already
quite old,
I was wondering if we should not first discuss of a general scheme
as to how we should proceed here.

  My plan was to:
1) create a contrib/ARI subdirectory
in which I would copy the existing scripts.

2) Remove useless code in new location and fix 
biggest errors in existing scripts only, leaving
other improvements for later (so that they would appear
in cvs history).

3) Add a minimal number of new scripts to allow easy use 
from within this new location.


One of the issues was about where we should put the corresponding ChangeLog
entries.

  I would like to restart the discussion
from here.


Pierre Muller
as potential ARI maintainer...



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: About RFA for Add scripts to generate ARI web pages to gdb/contrib/ari directory
       [not found]                         ` <50336283.a2db440a.600c.105dSMTPIN_ADDED@mx.google.com>
@ 2012-08-21 22:25                           ` Doug Evans
  0 siblings, 0 replies; 32+ messages in thread
From: Doug Evans @ 2012-08-21 22:25 UTC (permalink / raw)
  To: Pierre Muller
  Cc: gdb-patches, Pedro Alves, Jan Kratochvil, Sergio Durigan Junior,
	Joel Brobecker

On Tue, Aug 21, 2012 at 3:26 AM, Pierre Muller
<pierre.muller@ics-cnrs.unistra.fr> wrote:
>
>   I made several proposals to include the ARI script in the main source of
> gdb, but I had the strong impression to get comments/suggestions for changes
> that went in opposite directions in several replies I got in the
> four versions I sent.
>
>   Thus, I left for a nice holidays without having been able to
> commit anything.
>
>   In order to avoid further digressions and as the thread is now already
> quite old,
> I was wondering if we should not first discuss of a general scheme
> as to how we should proceed here.
>
>   My plan was to:
> 1) create a contrib/ARI subdirectory
> in which I would copy the existing scripts.

contrib/ari?

> 2) Remove useless code in new location and fix
> biggest errors in existing scripts only, leaving
> other improvements for later (so that they would appear
> in cvs history).
>
> 3) Add a minimal number of new scripts to allow easy use
> from within this new location.

Sounds great.

>
> One of the issues was about where we should put the corresponding ChangeLog
> entries.

gdb/ChangeLog is fine, I think.

>   I would like to restart the discussion
> from here.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: About RFA for Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-08-21 10:27                         ` About RFA for " Pierre Muller
@ 2012-08-21 22:36                           ` Sergio Durigan Junior
  0 siblings, 0 replies; 32+ messages in thread
From: Sergio Durigan Junior @ 2012-08-21 22:36 UTC (permalink / raw)
  To: Pierre Muller
  Cc: gdb-patches, 'Pedro Alves', 'Jan Kratochvil',
	'Joel Brobecker'

On Tuesday, August 21 2012, Pierre Muller wrote:

>   In order to avoid further digressions and as the thread is now already
> quite old,
> I was wondering if we should not first discuss of a general scheme
> as to how we should proceed here.

Sorry, I don't remember much of thread right now, but I glanced over the
archives and it seems that things were discussed quite a bit and your
patches were mostly OK.

>   My plan was to:
> 1) create a contrib/ARI subdirectory
> in which I would copy the existing scripts.

src/gdb/contrib/ari, as Doung pointed out.

> 2) Remove useless code in new location and fix 
> biggest errors in existing scripts only, leaving
> other improvements for later (so that they would appear
> in cvs history).

Great.

> 3) Add a minimal number of new scripts to allow easy use 
> from within this new location.

Great.

> One of the issues was about where we should put the corresponding ChangeLog
> entries.

ISTR that the maintainers are not found of creating new ChangeLog files
on every new directory, so you should put the entries in gdb/ChangeLog
as usual, IIUC.

>   I would like to restart the discussion
> from here.

> Pierre Muller
> as potential ARI maintainer...

Is there anything else?  I really don't remember.  If not, I guess you,
as the potential ARI maintainer, can take the next step and commit this
to the tree.  But that's my opinion, of course.

Thanks,

-- 
Sergio


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-06-14 12:36                 ` [RFA-v4] " Pierre Muller
  2012-06-14 16:02                   ` Joel Brobecker
@ 2012-09-26 22:15                   ` Pierre Muller
  2012-10-08 21:21                     ` Pierre Muller
  2012-10-22 21:04                     ` Joel Brobecker
  1 sibling, 2 replies; 32+ messages in thread
From: Pierre Muller @ 2012-09-26 22:15 UTC (permalink / raw)
  To: gdb-patches

[-- Attachment #1: Type: text/plain, Size: 887 bytes --]

  Here is again my patch to include
ARI web page creation into gdb/contrib directory
  This is almost v4 except that
the ChangeLog entry is directly in gdb directory
as most people seemed to be opposed to creating a ChangeLog file in
contrib. subdirectory.

  Joel made some suggestions about changing create-web-ari-in-src.sh
in order to create all files directly in the same directory,
but these script generate a lot of "useless" files
and having them together with the cvs files still worries me.
  

Pierre Muller
GDB pascal language maintainer


gdb/ChangeLog entry:

2012-09-27  Pierre Muller  <muller@ics.u-strasbg.fr>

        Incorporate ARI web page generator into GDB sources.
        * contrib/ari/create-web-ari-in-src.sh: New file.
        * contrib/ari/gdb_ari.sh: New file.
        * contrib/ari/gdb_find.sh: New file.
        * contrib/ari/update-web-ari.sh: New file.

[-- Attachment #2: contrib-ari.patch --]
[-- Type: application/octet-stream, Size: 71150 bytes --]

projecttype:gdb
revision:HEAD
email:muller@ics.u-strasbg.fr

2012-09-27  Pierre Muller  <muller@ics.u-strasbg.fr>

	Incorporate ARI web page generator into GDB sources.
	* contrib/ari/create-web-ari-in-src.sh: New file.
	* contrib/ari/gdb_ari.sh: New file.
	* contrib/ari/gdb_find.sh: New file.
	* contrib/ari/update-web-ari.sh: New file.

Index: create-web-ari-in-src.sh
===================================================================
RCS file: create-web-ari-in-src.sh
diff -N create-web-ari-in-src.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ create-web-ari-in-src.sh	26 Sep 2012 21:46:43 -0000
@@ -0,0 +1,77 @@
+#! /bin/sh
+
+# GDB script to create web ARI page directly from within gdb/ari directory.
+#
+# Copyright (C) 2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Determine directory of current script.
+scriptpath=`dirname $0`
+# If "scriptpath" is a relative path, then convert it to absolute.
+if [ "`echo ${scriptpath} | cut -b1`" != '/' ] ; then
+    scriptpath="`pwd`/${scriptpath}"
+fi
+
+# update-web-ari.sh script wants four parameters
+# 1: directory of checkout src or gdb-RELEASE for release sources.
+# 2: a temp directory.
+# 3: a directory for generated web page.
+# 4: The name of the current package, must be gdb here.
+# Here we provide default values for these 4 parameters
+
+# srcdir parameter
+if [ -z "${srcdir}" ] ; then
+  srcdir=${scriptpath}/../../..
+fi
+
+# Determine location of a temporary directory to be used by
+# update-web-ari.sh script.
+if [ -z "${tempdir}" ] ; then
+  if [ ! -z "$TMP" ] ; then
+    tempdir=$TMP/create-ari
+  elif [ ! -z "$TEMP" ] ; then
+    tempdir=$TEMP/create-ari
+  else
+    tempdir=/tmp/create-ari
+  fi
+fi
+
+# Default location of generate index.hmtl web page.
+if [ -z "${webdir}" ] ; then
+# Use 'branch' subdir name if Tag contains branch
+  if [ -f "${srcdir}/gdb/CVS/Tag" ] ; then
+    tagname=`cat "${srcdir}/gdb/CVS/Tag"`
+  else
+    tagname=trunk
+  fi
+  if [ "${tagname#branch}" != "${tagname}" ] ; then
+    subdir=branch
+  else
+    subdir=trunk
+  fi
+  webdir=`pwd`/${subdir}/ari
+fi
+
+# Launch update-web-ari.sh in same directory as current script.
+${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
+
+if [ -f "${webdir}/index.html" ] ; then
+  echo "ARI output can be viewed in file \"${webdir}/index.html\""
+else
+  echo "ARI script failed to generate file \"${webdir}/index.html\""
+fi
+
Index: gdb_ari.sh
===================================================================
RCS file: gdb_ari.sh
diff -N gdb_ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ gdb_ari.sh	26 Sep 2012 21:46:43 -0000
@@ -0,0 +1,1351 @@
+#!/bin/sh
+
+# GDB script to list of problems using awk.
+#
+# Copyright (C) 2002-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=c ; export LANG
+LC_ALL=c ; export LC_ALL
+
+# Permanent checks take the form:
+
+#     Do not use XXXX, ISO C 90 implies YYYY
+#     Do not use XXXX, instead use YYYY''.
+
+# and should never be removed.
+
+# Temporary checks take the form:
+
+#     Replace XXXX with YYYY
+
+# and once they reach zero, can be eliminated.
+
+# FIXME: It should be able to override this on the command line
+error="regression"
+warning="regression"
+ari="regression eol code comment deprecated legacy obsolete gettext"
+all="regression eol code comment deprecated legacy obsolete gettext deprecate internal gdbarch macro"
+print_doc=0
+print_idx=0
+
+usage ()
+{
+    cat <<EOF 1>&2
+Error: $1
+
+Usage:
+    $0 --print-doc --print-idx -Wall -Werror -W<category> <file> ...
+Options:
+  --print-doc    Print a list of all potential problems, then exit.
+  --print-idx    Include the problems IDX (index or key) in every message.
+  --src=file     Write source lines to file.
+  -Werror        Treat all problems as errors.
+  -Wall          Report all problems.
+  -Wari          Report problems that should be fixed in new code.
+  -W<category>   Report problems in the specifed category.  Vaid categories
+                 are: ${all}
+EOF
+    exit 1
+}
+
+
+# Parse the various options
+Woptions=
+srclines=""
+while test $# -gt 0
+do
+    case "$1" in
+    -Wall ) Woptions="${all}" ;;
+    -Wari ) Woptions="${ari}" ;;
+    -Werror ) Werror=1 ;;
+    -W* ) Woptions="${Woptions} `echo x$1 | sed -e 's/x-W//'`" ;;
+    --print-doc ) print_doc=1 ;;
+    --print-idx ) print_idx=1 ;;
+    --src=* ) srclines="`echo $1 | sed -e 's/--src=/srclines=\"/'`\"" ;;
+    -- ) shift ; break ;;
+    - ) break ;;
+    -* ) usage "$1: unknown option" ;;
+    * ) break ;;
+    esac
+    shift
+done
+if test -n "$Woptions" ; then
+    warning="$Woptions"
+    error=
+fi
+
+
+# -Werror implies treating all warnings as errors.
+if test -n "${Werror}" ; then
+    error="${error} ${warning}"
+fi
+
+
+# Validate all errors and warnings.
+for w in ${warning} ${error}
+do
+    case " ${all} " in
+    *" ${w} "* ) ;;
+    * ) usage "Unknown option -W${w}" ;;
+    esac
+done
+
+
+# make certain that there is at least one file.
+if test $# -eq 0 -a ${print_doc} = 0
+then
+    usage "Missing file."
+fi
+
+
+# Convert the errors/warnings into corresponding array entries.
+for a in ${all}
+do
+    aris="${aris} ari_${a} = \"${a}\";"
+done
+for w in ${warning}
+do
+    warnings="${warnings} warning[ari_${w}] = 1;"
+done
+for e in ${error}
+do
+    errors="${errors} error[ari_${e}]  = 1;"
+done
+
+if [ "$AWK" == "" ] ; then
+  AWK=awk
+fi
+
+${AWK} -- '
+BEGIN {
+    # NOTE, for a per-file begin use "FNR == 1".
+    '"${aris}"'
+    '"${errors}"'
+    '"${warnings}"'
+    '"${srclines}"'
+    print_doc =  '$print_doc'
+    print_idx =  '$print_idx'
+    PWD = "'`pwd`'"
+}
+
+# Print the error message for BUG.  Append SUPLEMENT if non-empty.
+function print_bug(file,line,prefix,category,bug,doc,supplement, suffix,idx) {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    if (supplement) {
+	suffix = " (" supplement ")"
+    } else {
+	suffix = ""
+    }
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print file ":" line ": " prefix category ": " idx doc suffix
+    if (srclines != "") {
+	print file ":" line ":" $0 >> srclines
+    }
+}
+
+function fix(bug,file,count) {
+    skip[bug, file] = count
+    skipped[bug, file] = 0
+}
+
+function fail(bug,supplement) {
+    if (doc[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing doc for bug " bug)
+	exit
+    }
+    if (category[bug] == "") {
+	print_bug("", 0, "internal: ", "internal", "internal", "Missing category for bug " bug)
+	exit
+    }
+
+    if (ARI_OK == bug) {
+	return
+    }
+    # Trim the filename down to just DIRECTORY/FILE so that it can be
+    # robustly used by the FIX code.
+
+    if (FILENAME ~ /^\//) {
+	canonicalname = FILENAME
+    } else {
+        canonicalname = PWD "/" FILENAME
+    }
+    shortname = gensub (/^.*\/([^\\]*\/[^\\]*)$/, "\\1", 1, canonicalname)
+
+    skipped[bug, shortname]++
+    if (skip[bug, shortname] >= skipped[bug, shortname]) {
+	# print FILENAME, FNR, skip[bug, FILENAME], skipped[bug, FILENAME], bug
+	# Do nothing
+    } else if (error[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "", category[bug], bug, doc[bug], supplement)
+    } else if (warning[category[bug]]) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print_bug(FILENAME, FNR, "warning: ", category[bug], bug, doc[bug], supplement)
+    }
+}
+
+FNR == 1 {
+    seen[FILENAME] = 1
+    if (match(FILENAME, "\\.[ly]$")) {
+      # FILENAME is a lex or yacc source
+      is_yacc_or_lex = 1
+    }
+    else {
+      is_yacc_or_lex = 0
+    }
+}
+END {
+    if (print_idx) {
+	idx = bug ": "
+    } else {
+	idx = ""
+    }
+    # Did we do only a partial skip?
+    for (bug_n_file in skip) {
+	split (bug_n_file, a, SUBSEP)
+	bug = a[1]
+	file = a[2]
+	if (seen[file] && (skipped[bug_n_file] < skip[bug_n_file])) {
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    b = file " missing " bug
+	    print_bug(file, 0, "", "internal", file " missing " bug, "Expecting " skip[bug_n_file] " occurances of bug " bug " in file " file ", only found " skipped[bug_n_file])
+	}
+    }
+}
+
+
+# Skip OBSOLETE lines
+/(^|[^_[:alnum:]])OBSOLETE([^_[:alnum:]]|$)/ { next; }
+
+# Skip ARI lines
+
+BEGIN {
+    ARI_OK = ""
+}
+
+/\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = gensub(/^.*\/\* ARI:[[:space:]]*(.*[^[:space:]])[[:space:]]*\*\/.*$/, "\\1", 1, $0)
+    # print "ARI line found \"" $0 "\""
+    # print "ARI_OK \"" ARI_OK "\""
+}
+! /\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
+    ARI_OK = ""
+}
+
+
+# Things in comments
+
+BEGIN { doc["GNU/Linux"] = "\
+Do not use `Linux'\'', instead use `Linux kernel'\'' or `GNU/Linux system'\'';\
+ comments should clearly differentiate between the two (this test assumes that\
+ word `Linux'\'' appears on the same line as the word `GNU'\'' or `kernel'\''\
+ or a kernel version"
+    category["GNU/Linux"] = ari_comment
+}
+/(^|[^_[:alnum:]])Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
+&& !/(^|[^_[:alnum:]])Linux [:digit:]\.[:digit:]+)/ {
+    fail("GNU/Linux")
+}
+
+BEGIN { doc["ARGSUSED"] = "\
+Do not use ARGSUSED, unnecessary"
+    category["ARGSUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ARGSUSED([^_[:alnum:]]|$)/ {
+    fail("ARGSUSED")
+}
+
+
+# SNIP - Strip out comments - SNIP
+
+FNR == 1 {
+    comment_p = 0
+}
+comment_p && /\*\// { gsub (/^([^\*]|\*+[^\/\*])*\*+\//, " "); comment_p = 0; }
+comment_p { next; }
+!comment_p { gsub (/\/\*([^\*]|\*+[^\/\*])*\*+\//, " "); }
+!comment_p && /(^|[^"])\/\*/ { gsub (/\/\*.*$/, " "); comment_p = 1; }
+
+
+BEGIN { doc["_ markup"] = "\
+All messages should be marked up with _."
+    category["_ markup"] = ari_gettext
+}
+/^[^"]*[[:space:]](warning|error|error_no_arg|query|perror_with_name)[[:space:]]*\([^_\(a-z]/ {
+    if (! /\("%s"/) {
+	fail("_ markup")
+    }
+}
+
+BEGIN { doc["trailing new line"] = "\
+A message should not have a trailing new line"
+    category["trailing new line"] = ari_gettext
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(_\(".*\\n"\)[\),]/ {
+    fail("trailing new line")
+}
+
+# Include files for which GDB has a custom version.
+
+BEGIN { doc["assert.h"] = "\
+Do not include assert.h, instead include \"gdb_assert.h\"";
+    category["assert.h"] = ari_regression
+    fix("assert.h", "gdb/gdb_assert.h", 0) # it does not use it
+}
+/^#[[:space:]]*include[[:space:]]+.assert\.h./ {
+    fail("assert.h")
+}
+
+BEGIN { doc["dirent.h"] = "\
+Do not include dirent.h, instead include gdb_dirent.h"
+    category["dirent.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.dirent\.h./ {
+    fail("dirent.h")
+}
+
+BEGIN { doc["regex.h"] = "\
+Do not include regex.h, instead include gdb_regex.h"
+    category["regex.h"] = ari_regression
+    fix("regex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.regex\.h./ {
+    fail("regex.h")
+}
+
+BEGIN { doc["xregex.h"] = "\
+Do not include xregex.h, instead include gdb_regex.h"
+    category["xregex.h"] = ari_regression
+    fix("xregex.h", "gdb/gdb_regex.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.xregex\.h./ {
+    fail("xregex.h")
+}
+
+BEGIN { doc["gnu-regex.h"] = "\
+Do not include gnu-regex.h, instead include gdb_regex.h"
+    category["gnu-regex.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.gnu-regex\.h./ {
+    fail("gnu regex.h")
+}
+
+BEGIN { doc["stat.h"] = "\
+Do not include stat.h or sys/stat.h, instead include gdb_stat.h"
+    category["stat.h"] = ari_regression
+    fix("stat.h", "gdb/gdb_stat.h", 1)
+}
+/^#[[:space:]]*include[[:space:]]*.stat\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/stat\.h./ {
+    fail("stat.h")
+}
+
+BEGIN { doc["wait.h"] = "\
+Do not include wait.h or sys/wait.h, instead include gdb_wait.h"
+    fix("wait.h", "gdb/gdb_wait.h", 2);
+    category["wait.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.wait\.h./ \
+|| /^#[[:space:]]*include[[:space:]]*.sys\/wait\.h./ {
+    fail("wait.h")
+}
+
+BEGIN { doc["vfork.h"] = "\
+Do not include vfork.h, instead include gdb_vfork.h"
+    fix("vfork.h", "gdb/gdb_vfork.h", 1);
+    category["vfork.h"] = ari_regression
+}
+/^#[[:space:]]*include[[:space:]]*.vfork\.h./ {
+    fail("vfork.h")
+}
+
+BEGIN { doc["error not internal-warning"] = "\
+Do not use error(\"internal-warning\"), instead use internal_warning"
+    category["error not internal-warning"] = ari_regression
+}
+/error.*\"[Ii]nternal.warning/ {
+    fail("error not internal-warning")
+}
+
+BEGIN { doc["%p"] = "\
+Do not use printf(\"%p\"), instead use printf(\"%s\",paddr()) to dump a \
+target address, or host_address_to_string() for a host address"
+    category["%p"] = ari_code
+}
+/%p/ && !/%prec/ {
+    fail("%p")
+}
+
+BEGIN { doc["%ll"] = "\
+Do not use printf(\"%ll\"), instead use printf(\"%s\",phex()) to dump a \
+`long long'\'' value"
+    category["%ll"] = ari_code
+}
+# Allow %ll in scanf
+/%[0-9]*ll/ && !/scanf \(.*%[0-9]*ll/ {
+    fail("%ll")
+}
+
+
+# SNIP - Strip out strings - SNIP
+
+# Test on top.c, scm-valprint.c, remote-rdi.c, ada-lang.c
+FNR == 1 {
+    string_p = 0
+    trace_string = 0
+}
+# Strip escaped characters.
+{ gsub(/\\./, "."); }
+# Strip quoted quotes.
+{ gsub(/'\''.'\''/, "'\''.'\''"); }
+# End of multi-line string
+string_p && /\"/ {
+    if (trace_string) print "EOS:" FNR, $0;
+    gsub (/^[^\"]*\"/, "'\''");
+    string_p = 0;
+}
+# Middle of multi-line string, discard line.
+string_p {
+    if (trace_string) print "MOS:" FNR, $0;
+    $0 = ""
+}
+# Strip complete strings from the middle of the line
+!string_p && /\"[^\"]*\"/ {
+    if (trace_string) print "COS:" FNR, $0;
+    gsub (/\"[^\"]*\"/, "'\''");
+}
+# Start of multi-line string
+BEGIN { doc["multi-line string"] = "\
+Multi-line string must have the newline escaped"
+    category["multi-line string"] = ari_regression
+}
+!string_p && /\"/ {
+    if (trace_string) print "SOS:" FNR, $0;
+    if (/[^\\]$/) {
+	fail("multi-line string")
+    }
+    gsub (/\"[^\"]*$/, "'\''");
+    string_p = 1;
+}
+# { print }
+
+# Multi-line string
+string_p &&
+
+# Accumulate continuation lines
+FNR == 1 {
+    cont_p = 0
+}
+!cont_p { full_line = ""; }
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+
+# GDB uses ISO C 90.  Check for any non pure ISO C 90 code
+
+BEGIN { doc["PARAMS"] = "\
+Do not use PARAMS(), ISO C 90 implies prototypes"
+    category["PARAMS"] = ari_regression
+}
+/(^|[^_[:alnum:]])PARAMS([^_[:alnum:]]|$)/ {
+    fail("PARAMS")
+}
+
+BEGIN { doc["__func__"] = "\
+Do not use __func__, ISO C 90 does not support this macro"
+    category["__func__"] = ari_regression
+    fix("__func__", "gdb/gdb_assert.h", 1)
+}
+/(^|[^_[:alnum:]])__func__([^_[:alnum:]]|$)/ {
+    fail("__func__")
+}
+
+BEGIN { doc["__FUNCTION__"] = "\
+Do not use __FUNCTION__, ISO C 90 does not support this macro"
+    category["__FUNCTION__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__FUNCTION__([^_[:alnum:]]|$)/ {
+    fail("__FUNCTION__")
+}
+
+BEGIN { doc["__CYGWIN32__"] = "\
+Do not use __CYGWIN32__, instead use __CYGWIN__ or, better, an explicit \
+autoconf tests"
+    category["__CYGWIN32__"] = ari_regression
+}
+/(^|[^_[:alnum:]])__CYGWIN32__([^_[:alnum:]]|$)/ {
+    fail("__CYGWIN32__")
+}
+
+BEGIN { doc["PTR"] = "\
+Do not use PTR, ISO C 90 implies `void *'\''"
+    category["PTR"] = ari_regression
+    #fix("PTR", "gdb/utils.c", 6)
+}
+/(^|[^_[:alnum:]])PTR([^_[:alnum:]]|$)/ {
+    fail("PTR")
+}
+
+BEGIN { doc["UCASE function"] = "\
+Function name is uppercase."
+    category["UCASE function"] = ari_code
+    possible_UCASE = 0
+    UCASE_full_line = ""
+}
+(possible_UCASE) {
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    # Closing brace found?
+    else if (UCASE_full_line ~ \
+	/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((UCASE_full_line ~ \
+	    /^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = UCASE_full_line;
+	    fail("UCASE function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_UCASE = 0
+	UCASE_full_line = ""
+    } else {
+	UCASE_full_line = UCASE_full_line $0;
+    }
+}
+/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_UCASE = 1
+    if (ARI_OK == "UCASE function") {
+	possible_UCASE = 0
+    }
+    possible_FNR = FNR
+    UCASE_full_line = $0
+}
+
+
+BEGIN { doc["editCase function"] = "\
+Function name starts lower case but has uppercased letters."
+    category["editCase function"] = ari_code
+    possible_editCase = 0
+    editCase_full_line = ""
+}
+(possible_editCase) {
+    if (ARI_OK == "ediCase function") {
+	possible_editCase = 0
+    }
+    # Closing brace found?
+    else if (editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\).*$/) {
+	if ((editCase_full_line ~ \
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
+	    && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
+	    store_FNR = FNR
+	    FNR = possible_FNR
+	    store_0 = $0;
+	    $0 = editCase_full_line;
+	    fail("editCase function")
+	    FNR = store_FNR
+	    $0 = store_0;
+	}
+	possible_editCase = 0
+	editCase_full_line = ""
+    } else {
+	editCase_full_line = editCase_full_line $0;
+    }
+}
+/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
+    possible_editCase = 1
+    if (ARI_OK == "editCase function") {
+        possible_editCase = 0
+    }
+    possible_FNR = FNR
+    editCase_full_line = $0
+}
+
+# Only function implementation should be on first column
+BEGIN { doc["function call in first column"] = "\
+Function name in first column should be restricted to function implementation"
+    category["function call in first column"] = ari_code
+}
+/^[a-z][a-z0-9_]*[[:space:]]*\((|[^*][^()]*)\)[[:space:]]*[^ \t]+/ {
+    fail("function call in first column")
+}
+
+
+# Functions without any parameter should have (void)
+# after their name not simply ().
+BEGIN { doc["no parameter function"] = "\
+Function having no parameter should be declared with funcname (void)."
+    category["no parameter function"] = ari_code
+}
+/^[a-zA-Z][a-z0-9A-Z_]*[[:space:]]*\(\)/ {
+    fail("no parameter function")
+}
+
+BEGIN { doc["hash"] = "\
+Do not use ` #...'\'', instead use `#...'\''(some compilers only correctly \
+parse a C preprocessor directive when `#'\'' is the first character on \
+the line)"
+    category["hash"] = ari_regression
+}
+/^[[:space:]]+#/ {
+    fail("hash")
+}
+
+BEGIN { doc["OP eol"] = "\
+Do not use &&, or || at the end of a line"
+    category["OP eol"] = ari_code
+}
+/(\|\||\&\&|==|!=)[[:space:]]*$/ {
+    fail("OP eol")
+}
+
+BEGIN { doc["strerror"] = "\
+Do not use strerror(), instead use safe_strerror()"
+    category["strerror"] = ari_regression
+    fix("strerror", "gdb/gdb_string.h", 1)
+    fix("strerror", "gdb/mingw-hdep.c", 1)
+    fix("strerror", "gdb/posix-hdep.c", 1)
+}
+/(^|[^_[:alnum:]])strerror[[:space:]]*\(/ {
+    fail("strerror")
+}
+
+BEGIN { doc["long long"] = "\
+Do not use `long long'\'', instead use LONGEST"
+    category["long long"] = ari_code
+    # defs.h needs two such patterns for LONGEST and ULONGEST definitions
+    fix("long long", "gdb/defs.h", 2)
+}
+/(^|[^_[:alnum:]])long[[:space:]]+long([^_[:alnum:]]|$)/ {
+    fail("long long")
+}
+
+BEGIN { doc["ATTRIBUTE_UNUSED"] = "\
+Do not use ATTRIBUTE_UNUSED, do not bother (GDB is compiled with -Werror and, \
+consequently, is not able to tolerate false warnings.  Since -Wunused-param \
+produces such warnings, neither that warning flag nor ATTRIBUTE_UNUSED \
+are used by GDB"
+    category["ATTRIBUTE_UNUSED"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTRIBUTE_UNUSED([^_[:alnum:]]|$)/ {
+    fail("ATTRIBUTE_UNUSED")
+}
+
+BEGIN { doc["ATTR_FORMAT"] = "\
+Do not use ATTR_FORMAT, use ATTRIBUTE_PRINTF instead"
+    category["ATTR_FORMAT"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_FORMAT([^_[:alnum:]]|$)/ {
+    fail("ATTR_FORMAT")
+}
+
+BEGIN { doc["ATTR_NORETURN"] = "\
+Do not use ATTR_NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["ATTR_NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])ATTR_NORETURN([^_[:alnum:]]|$)/ {
+    fail("ATTR_NORETURN")
+}
+
+BEGIN { doc["NORETURN"] = "\
+Do not use NORETURN, use ATTRIBUTE_NORETURN instead"
+    category["NORETURN"] = ari_regression
+}
+/(^|[^_[:alnum:]])NORETURN([^_[:alnum:]]|$)/ {
+    fail("NORETURN")
+}
+
+
+# General problems
+
+BEGIN { doc["multiple messages"] = "\
+Do not use multiple calls to warning or error, instead use a single call"
+    category["multiple messages"] = ari_gettext
+}
+FNR == 1 {
+    warning_fnr = -1
+}
+/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(/ {
+    if (FNR == warning_fnr + 1) {
+	fail("multiple messages")
+    } else {
+	warning_fnr = FNR
+    }
+}
+
+# Commented out, but left inside sources, just in case.
+# BEGIN { doc["inline"] = "\
+# Do not use the inline attribute; \
+# since the compiler generally ignores this, better algorithm selection \
+# is needed to improved performance"
+#    category["inline"] = ari_code
+# }
+# /(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
+#     fail("inline")
+# }
+
+# This test is obsolete as this type
+# has been deprecated and finally suppressed from GDB sources
+#BEGIN { doc["obj_private"] = "\
+#Replace obj_private with objfile_data"
+#    category["obj_private"] = ari_obsolete
+#}
+#/(^|[^_[:alnum:]])obj_private([^_[:alnum:]]|$)/ {
+#    fail("obj_private")
+#}
+
+BEGIN { doc["abort"] = "\
+Do not use abort, instead use internal_error; GDB should never abort"
+    category["abort"] = ari_regression
+    fix("abort", "gdb/utils.c", 3)
+}
+/(^|[^_[:alnum:]])abort[[:space:]]*\(/ {
+    fail("abort")
+}
+
+BEGIN { doc["basename"] = "\
+Do not use basename, instead use lbasename"
+    category["basename"] = ari_regression
+}
+/(^|[^_[:alnum:]])basename[[:space:]]*\(/ {
+    fail("basename")
+}
+
+BEGIN { doc["assert"] = "\
+Do not use assert, instead use gdb_assert or internal_error; assert \
+calls abort and GDB should never call abort"
+    category["assert"] = ari_regression
+}
+/(^|[^_[:alnum:]])assert[[:space:]]*\(/ {
+    fail("assert")
+}
+
+BEGIN { doc["TARGET_HAS_HARDWARE_WATCHPOINTS"] = "\
+Replace TARGET_HAS_HARDWARE_WATCHPOINTS with nothing, not needed"
+    category["TARGET_HAS_HARDWARE_WATCHPOINTS"] = ari_regression
+}
+/(^|[^_[:alnum:]])TARGET_HAS_HARDWARE_WATCHPOINTS([^_[:alnum:]]|$)/ {
+    fail("TARGET_HAS_HARDWARE_WATCHPOINTS")
+}
+
+BEGIN { doc["ADD_SHARED_SYMBOL_FILES"] = "\
+Replace ADD_SHARED_SYMBOL_FILES with nothing, not needed?"
+    category["ADD_SHARED_SYMBOL_FILES"] = ari_regression
+}
+/(^|[^_[:alnum:]])ADD_SHARED_SYMBOL_FILES([^_[:alnum:]]|$)/ {
+    fail("ADD_SHARED_SYMBOL_FILES")
+}
+
+BEGIN { doc["SOLIB_ADD"] = "\
+Replace SOLIB_ADD with nothing, not needed?"
+    category["SOLIB_ADD"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_ADD([^_[:alnum:]]|$)/ {
+    fail("SOLIB_ADD")
+}
+
+BEGIN { doc["SOLIB_CREATE_INFERIOR_HOOK"] = "\
+Replace SOLIB_CREATE_INFERIOR_HOOK with nothing, not needed?"
+    category["SOLIB_CREATE_INFERIOR_HOOK"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])SOLIB_CREATE_INFERIOR_HOOK([^_[:alnum:]]|$)/ {
+    fail("SOLIB_CREATE_INFERIOR_HOOK")
+}
+
+BEGIN { doc["SOLIB_LOADED_LIBRARY_PATHNAME"] = "\
+Replace SOLIB_LOADED_LIBRARY_PATHNAME with nothing, not needed?"
+    category["SOLIB_LOADED_LIBRARY_PATHNAME"] = ari_regression
+}
+/(^|[^_[:alnum:]])SOLIB_LOADED_LIBRARY_PATHNAME([^_[:alnum:]]|$)/ {
+    fail("SOLIB_LOADED_LIBRARY_PATHNAME")
+}
+
+BEGIN { doc["REGISTER_U_ADDR"] = "\
+Replace REGISTER_U_ADDR with nothing, not needed?"
+    category["REGISTER_U_ADDR"] = ari_regression
+}
+/(^|[^_[:alnum:]])REGISTER_U_ADDR([^_[:alnum:]]|$)/ {
+    fail("REGISTER_U_ADDR")
+}
+
+BEGIN { doc["PROCESS_LINENUMBER_HOOK"] = "\
+Replace PROCESS_LINENUMBER_HOOK with nothing, not needed?"
+    category["PROCESS_LINENUMBER_HOOK"] = ari_regression
+}
+/(^|[^_[:alnum:]])PROCESS_LINENUMBER_HOOK([^_[:alnum:]]|$)/ {
+    fail("PROCESS_LINENUMBER_HOOK")
+}
+
+BEGIN { doc["PC_SOLIB"] = "\
+Replace PC_SOLIB with nothing, not needed?"
+    category["PC_SOLIB"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])PC_SOLIB([^_[:alnum:]]|$)/ {
+    fail("PC_SOLIB")
+}
+
+BEGIN { doc["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = "\
+Replace IN_SOLIB_DYNSYM_RESOLVE_CODE with nothing, not needed?"
+    category["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = ari_regression
+}
+/(^|[^_[:alnum:]])IN_SOLIB_DYNSYM_RESOLVE_CODE([^_[:alnum:]]|$)/ {
+    fail("IN_SOLIB_DYNSYM_RESOLVE_CODE")
+}
+
+BEGIN { doc["GCC_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["GCC2_COMPILED_FLAG_SYMBOL"] = "\
+Replace GCC2_COMPILED_FLAG_SYMBOL with nothing, not needed?"
+    category["GCC2_COMPILED_FLAG_SYMBOL"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])GCC2_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
+    fail("GCC2_COMPILED_FLAG_SYMBOL")
+}
+
+BEGIN { doc["FUNCTION_EPILOGUE_SIZE"] = "\
+Replace FUNCTION_EPILOGUE_SIZE with nothing, not needed?"
+    category["FUNCTION_EPILOGUE_SIZE"] = ari_regression
+}
+/(^|[^_[:alnum:]])FUNCTION_EPILOGUE_SIZE([^_[:alnum:]]|$)/ {
+    fail("FUNCTION_EPILOGUE_SIZE")
+}
+
+BEGIN { doc["HAVE_VFORK"] = "\
+Do not use HAVE_VFORK, instead include \"gdb_vfork.h\" and call vfork() \
+unconditionally"
+    category["HAVE_VFORK"] = ari_regression
+}
+/(^|[^_[:alnum:]])HAVE_VFORK([^_[:alnum:]]|$)/ {
+    fail("HAVE_VFORK")
+}
+
+BEGIN { doc["bcmp"] = "\
+Do not use bcmp(), ISO C 90 implies memcmp()"
+    category["bcmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcmp[[:space:]]*\(/ {
+    fail("bcmp")
+}
+
+BEGIN { doc["setlinebuf"] = "\
+Do not use setlinebuf(), ISO C 90 implies setvbuf()"
+    category["setlinebuf"] = ari_regression
+}
+/(^|[^_[:alnum:]])setlinebuf[[:space:]]*\(/ {
+    fail("setlinebuf")
+}
+
+BEGIN { doc["bcopy"] = "\
+Do not use bcopy(), ISO C 90 implies memcpy() and memmove()"
+    category["bcopy"] = ari_regression
+}
+/(^|[^_[:alnum:]])bcopy[[:space:]]*\(/ {
+    fail("bcopy")
+}
+
+BEGIN { doc["get_frame_base"] = "\
+Replace get_frame_base with get_frame_id, get_frame_base_address, \
+get_frame_locals_address, or get_frame_args_address."
+    category["get_frame_base"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])get_frame_base([^_[:alnum:]]|$)/ {
+    fail("get_frame_base")
+}
+
+BEGIN { doc["floatformat_to_double"] = "\
+Do not use floatformat_to_double() from libierty, \
+instead use floatformat_to_doublest()"
+    fix("floatformat_to_double", "gdb/doublest.c", 1)
+    category["floatformat_to_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_to_double[[:space:]]*\(/ {
+    fail("floatformat_to_double")
+}
+
+BEGIN { doc["floatformat_from_double"] = "\
+Do not use floatformat_from_double() from libierty, \
+instead use floatformat_from_doublest()"
+    category["floatformat_from_double"] = ari_regression
+}
+/(^|[^_[:alnum:]])floatformat_from_double[[:space:]]*\(/ {
+    fail("floatformat_from_double")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["LITTLE_ENDIAN"] = "\
+Do not use LITTLE_ENDIAN, instead use BFD_ENDIAN_LITTLE";
+    category["LITTLE_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])LITTLE_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("LITTLE_ENDIAN")
+}
+
+BEGIN { doc["BIG_ENDIAN"] = "\
+Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
+    category["BIG_ENDIAN"] = ari_regression
+}
+/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
+    fail("BIG_ENDIAN")
+}
+
+BEGIN { doc["sec_ptr"] = "\
+Instead of sec_ptr, use struct bfd_section";
+    category["sec_ptr"] = ari_regression
+}
+/(^|[^_[:alnum:]])sec_ptr([^_[:alnum:]]|$)/ {
+    fail("sec_ptr")
+}
+
+BEGIN { doc["frame_unwind_unsigned_register"] = "\
+Replace frame_unwind_unsigned_register with frame_unwind_register_unsigned"
+    category["frame_unwind_unsigned_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])frame_unwind_unsigned_register([^_[:alnum:]]|$)/ {
+    fail("frame_unwind_unsigned_register")
+}
+
+BEGIN { doc["frame_register_read"] = "\
+Replace frame_register_read() with get_frame_register(), or \
+possibly introduce a new method safe_get_frame_register()"
+    category["frame_register_read"] = ari_obsolete
+}
+/(^|[^_[:alnum:]])frame_register_read([^_[:alnum:]]|$)/ {
+    fail("frame_register_read")
+}
+
+BEGIN { doc["read_register"] = "\
+Replace read_register() with regcache_read() et.al."
+    category["read_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_register([^_[:alnum:]]|$)/ {
+    fail("read_register")
+}
+
+BEGIN { doc["write_register"] = "\
+Replace write_register() with regcache_read() et.al."
+    category["write_register"] = ari_regression
+}
+/(^|[^_[:alnum:]])write_register([^_[:alnum:]]|$)/ {
+    fail("write_register")
+}
+
+function report(name) {
+    # Drop any trailing _P.
+    name = gensub(/(_P|_p)$/, "", 1, name)
+    # Convert to lower case
+    name = tolower(name)
+    # Split into category and bug
+    cat = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\1", 1, name)
+    bug = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\2", 1, name)
+    # Report it
+    name = cat " " bug
+    doc[name] = "Do not use " cat " " bug ", see declaration for details"
+    category[name] = cat
+    fail(name)
+}
+
+/(^|[^_[:alnum:]])(DEPRECATED|deprecated|set_gdbarch_deprecated|LEGACY|legacy|set_gdbarch_legacy)_/ {
+    line = $0
+    # print "0 =", $0
+    while (1) {
+	name = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\2", 1, line)
+	line = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\1 \\4", 1, line)
+	# print "name =", name, "line =", line
+	if (name == line) break;
+	report(name)
+    }
+}
+
+# Count the number of times each architecture method is set
+/(^|[^_[:alnum:]])set_gdbarch_[_[:alnum:]]*([^_[:alnum:]]|$)/ {
+    name = gensub(/^.*set_gdbarch_([_[:alnum:]]*).*$/, "\\1", 1, $0)
+    doc["set " name] = "\
+Call to set_gdbarch_" name
+    category["set " name] = ari_gdbarch
+    fail("set " name)
+}
+
+# Count the number of times each tm/xm/nm macro is defined or undefined
+/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+.*$/ \
+&& !/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+_H($|[[:space:]])/ \
+&& FILENAME ~ /(^|\/)config\/(|[^\/]*\/)(tm-|xm-|nm-).*\.h$/ {
+    basename = gensub(/(^|.*\/)([^\/]*)$/, "\\2", 1, FILENAME)
+    type = gensub(/^(tm|xm|nm)-.*\.h$/, "\\1", 1, basename)
+    name = gensub(/^#[[:space:]]*(undef|define)[[:space:]]+([[:alnum:]_]+).*$/, "\\2", 1, $0)
+    if (type == basename) {
+        type = "macro"
+    }
+    doc[type " " name] = "\
+Do not define macros such as " name " in a tm, nm or xm file, \
+in fact do not provide a tm, nm or xm file"
+    category[type " " name] = ari_macro
+    fail(type " " name)
+}
+
+BEGIN { doc["deprecated_registers"] = "\
+Replace deprecated_registers with nothing, they have reached \
+end-of-life"
+    category["deprecated_registers"] = ari_eol
+}
+/(^|[^_[:alnum:]])deprecated_registers([^_[:alnum:]]|$)/ {
+    fail("deprecated_registers")
+}
+
+BEGIN { doc["read_pc"] = "\
+Replace READ_PC() with frame_pc_unwind; \
+at present the inferior function call code still uses this"
+    category["read_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_PC[[:space:]]*\(/ {
+    fail("read_pc")
+}
+
+BEGIN { doc["write_pc"] = "\
+Replace write_pc() with get_frame_base_address or get_frame_id; \
+at present the inferior function call code still uses this when doing \
+a DECR_PC_AFTER_BREAK"
+    category["write_pc"] = ari_deprecate
+}
+/(^|[^_[:alnum:]])write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_write_pc[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_WRITE_PC[[:space:]]*\(/ {
+    fail("write_pc")
+}
+
+BEGIN { doc["generic_target_write_pc"] = "\
+Replace generic_target_write_pc with a per-architecture implementation, \
+this relies on PC_REGNUM which is being eliminated"
+    category["generic_target_write_pc"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_target_write_pc([^_[:alnum:]]|$)/ {
+    fail("generic_target_write_pc")
+}
+
+BEGIN { doc["read_sp"] = "\
+Replace read_sp() with frame_sp_unwind"
+    category["read_sp"] = ari_regression
+}
+/(^|[^_[:alnum:]])read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])set_gdbarch_read_sp[[:space:]]*\(/ || \
+/(^|[^_[:alnum:]])TARGET_READ_SP[[:space:]]*\(/ {
+    fail("read_sp")
+}
+
+BEGIN { doc["register_cached"] = "\
+Replace register_cached() with nothing, does not have a regcache parameter"
+    category["register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])register_cached[[:space:]]*\(/ {
+    fail("register_cached")
+}
+
+BEGIN { doc["set_register_cached"] = "\
+Replace set_register_cached() with nothing, does not have a regcache parameter"
+    category["set_register_cached"] = ari_regression
+}
+/(^|[^_[:alnum:]])set_register_cached[[:space:]]*\(/ {
+    fail("set_register_cached")
+}
+
+# Print functions: Use versions that either check for buffer overflow
+# or safely allocate a fresh buffer.
+
+BEGIN { doc["sprintf"] = "\
+Do not use sprintf, instead use xsnprintf or xstrprintf"
+    category["sprintf"] = ari_code
+}
+/(^|[^_[:alnum:]])sprintf[[:space:]]*\(/ {
+    fail("sprintf")
+}
+
+BEGIN { doc["vsprintf"] = "\
+Do not use vsprintf(), instead use xstrvprintf"
+    category["vsprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vsprintf[[:space:]]*\(/ {
+    fail("vsprintf")
+}
+
+BEGIN { doc["asprintf"] = "\
+Do not use asprintf(), instead use xstrprintf()"
+    category["asprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])asprintf[[:space:]]*\(/ {
+    fail("asprintf")
+}
+
+BEGIN { doc["vasprintf"] = "\
+Do not use vasprintf(), instead use xstrvprintf"
+    fix("vasprintf", "gdb/utils.c", 1)
+    category["vasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])vasprintf[[:space:]]*\(/ {
+    fail("vasprintf")
+}
+
+BEGIN { doc["xasprintf"] = "\
+Do not use xasprintf(), instead use xstrprintf"
+    fix("xasprintf", "gdb/defs.h", 1)
+    fix("xasprintf", "gdb/utils.c", 1)
+    category["xasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xasprintf[[:space:]]*\(/ {
+    fail("xasprintf")
+}
+
+BEGIN { doc["xvasprintf"] = "\
+Do not use xvasprintf(), instead use xstrvprintf"
+    fix("xvasprintf", "gdb/defs.h", 1)
+    fix("xvasprintf", "gdb/utils.c", 1)
+    category["xvasprintf"] = ari_regression
+}
+/(^|[^_[:alnum:]])xvasprintf[[:space:]]*\(/ {
+    fail("xvasprintf")
+}
+
+# More generic memory operations
+
+BEGIN { doc["bzero"] = "\
+Do not use bzero(), instead use memset()"
+    category["bzero"] = ari_regression
+}
+/(^|[^_[:alnum:]])bzero[[:space:]]*\(/ {
+    fail("bzero")
+}
+
+BEGIN { doc["strdup"] = "\
+Do not use strdup(), instead use xstrdup()";
+    category["strdup"] = ari_regression
+}
+/(^|[^_[:alnum:]])strdup[[:space:]]*\(/ {
+    fail("strdup")
+}
+
+BEGIN { doc["strsave"] = "\
+Do not use strsave(), instead use xstrdup() et.al."
+    category["strsave"] = ari_regression
+}
+/(^|[^_[:alnum:]])strsave[[:space:]]*\(/ {
+    fail("strsave")
+}
+
+# String compare functions
+
+BEGIN { doc["strnicmp"] = "\
+Do not use strnicmp(), instead use strncasecmp()"
+    category["strnicmp"] = ari_regression
+}
+/(^|[^_[:alnum:]])strnicmp[[:space:]]*\(/ {
+    fail("strnicmp")
+}
+
+# Boolean expressions and conditionals
+
+BEGIN { doc["boolean"] = "\
+Do not use `boolean'\'',  use `int'\'' instead"
+    category["boolean"] = ari_regression
+}
+/(^|[^_[:alnum:]])boolean([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("boolean")
+    }
+}
+
+BEGIN { doc["false"] = "\
+Definitely do not use `false'\'' in boolean expressions"
+    category["false"] = ari_regression
+}
+/(^|[^_[:alnum:]])false([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("false")
+    }
+}
+
+BEGIN { doc["true"] = "\
+Do not try to use `true'\'' in boolean expressions"
+    category["true"] = ari_regression
+}
+/(^|[^_[:alnum:]])true([^_[:alnum:]]|$)/ {
+    if (is_yacc_or_lex == 0) {
+       fail("true")
+    }
+}
+
+# Typedefs that are either redundant or can be reduced to `struct
+# type *''.
+# Must be placed before if assignment otherwise ARI exceptions
+# are not handled correctly.
+
+BEGIN { doc["d_namelen"] = "\
+Do not use dirent.d_namelen, instead use NAMELEN"
+    category["d_namelen"] = ari_regression
+}
+/(^|[^_[:alnum:]])d_namelen([^_[:alnum:]]|$)/ {
+    fail("d_namelen")
+}
+
+BEGIN { doc["strlen d_name"] = "\
+Do not use strlen dirent.d_name, instead use NAMELEN"
+    category["strlen d_name"] = ari_regression
+}
+/(^|[^_[:alnum:]])strlen[[:space:]]*\(.*[^_[:alnum:]]d_name([^_[:alnum:]]|$)/ {
+    fail("strlen d_name")
+}
+
+BEGIN { doc["var_boolean"] = "\
+Replace var_boolean with add_setshow_boolean_cmd"
+    category["var_boolean"] = ari_regression
+    fix("var_boolean", "gdb/command.h", 1)
+    # fix only uses the last directory level
+    fix("var_boolean", "cli/cli-decode.c", 2)
+}
+/(^|[^_[:alnum:]])var_boolean([^_[:alnum:]]|$)/ {
+    if ($0 !~ /(^|[^_[:alnum:]])case *var_boolean:/) {
+	fail("var_boolean")
+    }
+}
+
+BEGIN { doc["generic_use_struct_convention"] = "\
+Replace generic_use_struct_convention with nothing, \
+EXTRACT_STRUCT_VALUE_ADDRESS is a predicate"
+    category["generic_use_struct_convention"] = ari_regression
+}
+/(^|[^_[:alnum:]])generic_use_struct_convention([^_[:alnum:]]|$)/ {
+    fail("generic_use_struct_convention")
+}
+
+BEGIN { doc["if assignment"] = "\
+An IF statement'\''s expression contains an assignment (the GNU coding \
+standard discourages this)"
+    category["if assignment"] = ari_code
+}
+BEGIN { doc["if clause more than 50 lines"] = "\
+An IF statement'\''s expression expands over 50 lines"
+    category["if clause more than 50 lines"] = ari_code
+}
+#
+# Accumulate continuation lines
+FNR == 1 {
+    in_if = 0
+}
+
+/(^|[^_[:alnum:]])if / {
+    in_if = 1;
+    if_brace_level = 0;
+    if_cont_p = 0;
+    if_count = 0;
+    if_brace_end_pos = 0;
+    if_full_line = "";
+}
+(in_if)  {
+    # We want everything up to closing brace of same level
+    if_count++;
+    if (if_count > 50) {
+	print "multiline if: " if_full_line $0
+	fail("if clause more than 50 lines")
+	if_brace_level = 0;
+	if_full_line = "";
+    } else {
+	if (if_count == 1) {
+	    i = index($0,"if ");
+	} else {
+	    i = 1;
+	}
+	for (i=i; i <= length($0); i++) {
+	    char = substr($0,i,1);
+	    if (char == "(") { if_brace_level++; }
+	    if (char == ")") {
+		if_brace_level--;
+		if (!if_brace_level) {
+		    if_brace_end_pos = i;
+		    after_if = substr($0,i+1,length($0));
+		    # Do not parse what is following
+		    break;
+		}
+	    }
+	}
+	if (if_brace_level == 0) {
+	    $0 = substr($0,1,i);
+	    in_if = 0;
+	} else {
+	    if_full_line = if_full_line $0;
+	    if_cont_p = 1;
+	    next;
+	}
+    }
+}
+# if we arrive here, we need to concatenate, but we are at brace level 0
+
+(if_brace_end_pos) {
+    $0 = if_full_line substr($0,1,if_brace_end_pos);
+    if (if_count > 1) {
+	# print "IF: multi line " if_count " found at " FILENAME ":" FNR " \"" $0 "\""
+    }
+    if_cont_p = 0;
+    if_full_line = "";
+}
+/(^|[^_[:alnum:]])if .* = / {
+    # print "fail in if " $0
+    fail("if assignment")
+}
+(if_brace_end_pos) {
+    $0 = $0 after_if;
+    if_brace_end_pos = 0;
+    in_if = 0;
+}
+
+# Printout of all found bug
+
+BEGIN {
+    if (print_doc) {
+	for (bug in doc) {
+	    fail(bug)
+	}
+	exit
+    }
+}' "$@"
+
Index: gdb_find.sh
===================================================================
RCS file: gdb_find.sh
diff -N gdb_find.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ gdb_find.sh	26 Sep 2012 21:46:43 -0000
@@ -0,0 +1,41 @@
+#!/bin/sh
+
+# GDB script to create list of files to check using gdb_ari.sh.
+#
+# Copyright (C) 2003-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Make certain that the script is not running in an internationalized
+# environment.
+
+LANG=C ; export LANG
+LC_ALL=C ; export LC_ALL
+
+
+# A find that prunes files that GDB users shouldn't be interested in.
+# Use sort to order files alphabetically.
+
+find "$@" \
+    -name testsuite -prune -o \
+    -name gdbserver -prune -o \
+    -name gnulib -prune -o \
+    -name osf-share -prune -o \
+    -name '*-stub.c' -prune -o \
+    -name '*-exp.c' -prune -o \
+    -name ada-lex.c -prune -o \
+    -name cp-name-parser.c -prune -o \
+    -type f -name '*.[lyhc]' -print | sort
Index: update-web-ari.sh
===================================================================
RCS file: update-web-ari.sh
diff -N update-web-ari.sh
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ update-web-ari.sh	26 Sep 2012 21:46:43 -0000
@@ -0,0 +1,921 @@
+#!/bin/sh -x
+
+# GDB script to create GDB ARI web page.
+#
+# Copyright (C) 2001-2012 Free Software Foundation, Inc.
+#
+# This file is part of GDB.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# TODO: setjmp.h, setjmp and longjmp.
+
+# Direct stderr into stdout but still hang onto stderr (/dev/fd/3)
+exec 3>&2 2>&1
+ECHO ()
+{
+#   echo "$@" | tee /dev/fd/3 1>&2
+    echo "$@" 1>&2
+    echo "$@" 1>&3
+}
+
+# Really mindless usage
+if test $# -ne 4
+then
+    echo "Usage: $0 <snapshot/sourcedir> <tmpdir> <destdir> <project>" 1>&2
+    exit 1
+fi
+snapshot=$1 ; shift
+tmpdir=$1 ; shift
+wwwdir=$1 ; shift
+project=$1 ; shift
+
+# Try to create destination directory if it doesn't exist yet
+if [ ! -d ${wwwdir} ]
+then
+  mkdir -p ${wwwdir}
+fi
+
+# Fail if destination directory doesn't exist or is not writable
+if [ ! -w ${wwwdir} -o ! -d ${wwwdir} ]
+then
+  echo ERROR: Can not write to directory ${wwwdir} >&2
+  exit 2
+fi
+
+if [ ! -r ${snapshot} ]
+then
+    echo ERROR: Can not read snapshot file 1>&2
+    exit 1
+fi
+
+# FILE formats
+# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+# Where ``*'' is {source,warning,indent,doschk}
+
+unpack_source_p=true
+delete_source_p=true
+
+check_warning_p=false # broken
+check_indent_p=false # too slow, too many fail
+check_source_p=true
+check_doschk_p=true
+check_werror_p=true
+
+update_doc_p=true
+update_web_p=true
+
+if awk --version 2>&1 </dev/null | grep -i gnu > /dev/null
+then
+  AWK=awk
+else
+  AWK=gawk
+fi
+export AWK
+
+# Set up a few cleanups
+if ${delete_source_p}
+then
+    trap "cd /tmp; rm -rf ${tmpdir}; exit" 0 1 2 15
+fi
+
+
+# If the first parameter is a directory,
+#we just use it as the extracted source
+if [ -d ${snapshot} ]
+then
+  module=${project}
+  srcdir=${snapshot}
+  aridir=${srcdir}/${module}/contrib/ari
+  unpack_source_p=false
+  delete_source_p=false
+  version_in=${srcdir}/${module}/version.in
+else
+  # unpack the tar-ball
+  if ${unpack_source_p}
+  then
+    # Was it previously unpacked?
+    if ${delete_source_p} || test ! -d ${tmpdir}/${module}*
+    then
+	/bin/rm -rf "${tmpdir}"
+	/bin/mkdir -p ${tmpdir}
+	if [ ! -d ${tmpdir} ]
+	then
+	    echo "Problem creating work directory"
+	    exit 1
+	fi
+	cd ${tmpdir} || exit 1
+	echo `date`: Unpacking tar-ball ...
+	case ${snapshot} in
+	    *.tar.bz2 ) bzcat ${snapshot} ;;
+	    *.tar ) cat ${snapshot} ;;
+	    * ) ECHO Bad file ${snapshot} ; exit 1 ;;
+	esac | tar xf -
+    fi
+  fi
+
+  module=`basename ${snapshot}`
+  module=`basename ${module} .bz2`
+  module=`basename ${module} .tar`
+  srcdir=`echo ${tmpdir}/${module}*`
+  aridir=${HOME}/ss
+  version_in=${srcdir}/gdb/version.in
+fi
+
+if [ ! -r ${version_in} ]
+then
+    echo ERROR: missing version file 1>&2
+    exit 1
+fi
+version=`cat ${version_in}`
+
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_warning_p} && test -d "${srcdir}"
+then
+    echo `date`: Parsing compiler warnings 1>&2
+    cat ${root}/ari.compile | $AWK '
+BEGIN {
+    FS=":";
+}
+/^[^:]*:[0-9]*: warning:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  warning[file] += 1;
+}
+/^[^:]*:[0-9]*: error:/ {
+  file = $1;
+  #sub (/^.*\//, "", file);
+  error[file] += 1;
+}
+END {
+  for (file in warning) {
+    print file ":warning:" level[file]
+  }
+  for (file in error) {
+    print file ":error:" level[file]
+  }
+}
+' > ${root}/ari.warning.bug
+fi
+
+# THIS HAS SUFFERED BIT ROT
+if ${check_indent_p} && test -d "${srcdir}"
+then
+    printf "Analizing file indentation:" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh ${project} | while read f
+    do
+	if /bin/sh ${aridir}/gdb_indent.sh < ${f} 2>/dev/null | cmp -s - ${f}
+	then
+	    :
+	else
+	    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	    echo "${f}:0: info: indent: Indentation does not match GNU indent output"
+	fi
+    done ) > ${wwwdir}/ari.indent.bug
+    echo ""
+fi
+
+if ${check_source_p} && test -d "${srcdir}"
+then
+    bugf=${wwwdir}/ari.source.bug
+    oldf=${wwwdir}/ari.source.old
+    srcf=${wwwdir}/ari.source.lines
+    oldsrcf=${wwwdir}/ari.source.lines-old
+
+    diff=${wwwdir}/ari.source.diff
+    diffin=${diff}-in
+    newf1=${bugf}1
+    oldf1=${oldf}1
+    oldpruned=${oldf1}-pruned
+    newpruned=${newf1}-pruned
+
+    cp -f ${bugf} ${oldf}
+    cp -f ${srcf} ${oldsrcf}
+    rm -f ${srcf}
+    node=`uname -n`
+    echo "`date`: Using source lines ${srcf}" 1>&2
+    echo "`date`: Checking source code" 1>&2
+    ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh "${project}" | \
+	xargs /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --src=${srcf}
+    ) > ${bugf}
+    # Remove things we are not interested in to signal by email
+    # gdbarch changes are not important here
+    # Also convert ` into ' to avoid command substitution in script below
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${oldf} > ${oldf1}
+    sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${bugf} > ${newf1}
+    # Remove line number info so that code inclusion/deletion
+    # has no impact on the result
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${oldf1} > ${oldpruned}
+    sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${newf1} > ${newpruned}
+    # Use diff without option to get normal diff output that
+    # is reparsed after
+    diff ${oldpruned} ${newpruned} > ${diffin}
+    # Only keep new warnings
+    sed -n -e "/^>.*/p" ${diffin} > ${diff}
+    sedscript=${wwwdir}/sedscript
+    script=${wwwdir}/script
+    sed -n -e "s|\(^[0-9,]*\)a\(.*\)|echo \1a\2 \n \
+	sed -n \'\2s:\\\\(.*\\\\):> \\\\1:p\' ${newf1}|p" \
+	-e "s|\(^[0-9,]*\)d\(.*\)|echo \1d\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1}|p" \
+	-e "s|\(^[0-9,]*\)c\(.*\)|echo \1c\2\n \
+	sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1} \n \
+	sed -n \"\2s:\\\\(.*\\\\):> \\\\1:p\" ${newf1}|p" \
+	${diffin} > ${sedscript}
+    ${SHELL} ${sedscript} > ${wwwdir}/message
+    sed -n \
+	-e "s;\(.*\);echo \\\"\1\\\";p" \
+	-e "s;.*< \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${oldsrcf};p" \
+	-e "s;.*> \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${srcf};p" \
+	${wwwdir}/message > ${script}
+    ${SHELL} ${script} > ${wwwdir}/mail-message
+    if [ "x${branch}" != "x" ]; then
+	email_suffix="`date` in ${branch}"
+    else
+	email_suffix="`date`"
+    fi
+
+fi
+
+
+
+
+if ${check_doschk_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking for doschk" 1>&2
+    rm -f "${wwwdir}"/ari.doschk.*
+    fnchange_lst="${srcdir}"/gdb/config/djgpp/fnchange.lst
+    fnchange_awk="${wwwdir}"/ari.doschk.awk
+    doschk_in="${wwwdir}"/ari.doschk.in
+    doschk_out="${wwwdir}"/ari.doschk.out
+    doschk_bug="${wwwdir}"/ari.doschk.bug
+    doschk_char="${wwwdir}"/ari.doschk.char
+
+    # Transform fnchange.lst into fnchange.awk.  The program DJTAR
+    # does a textual substitution of each file name using the list.
+    # Generate an awk script that does the equivalent - matches an
+    # exact line and then outputs the replacement.
+
+    sed -e 's;@[^@]*@[/]*\([^ ]*\) @[^@]*@[/]*\([^ ]*\);\$0 == "\1" { print "\2"\; next\; };' \
+	< "${fnchange_lst}" > "${fnchange_awk}"
+    echo '{ print }' >> "${fnchange_awk}"
+
+    # Do the raw analysis - transform the list of files into the DJGPP
+    # equivalents putting it in the .in file
+    ( cd "${srcdir}" && find * \
+	-name '*.info-[0-9]*' -prune \
+	-o -name tcl -prune \
+	-o -name itcl -prune \
+	-o -name tk -prune \
+	-o -name libgui -prune \
+	-o -name tix -prune \
+	-o -name dejagnu -prune \
+	-o -name expect -prune \
+	-o -type f -print ) \
+    | $AWK -f ${fnchange_awk} > ${doschk_in}
+
+    # Start with a clean slate
+    rm -f ${doschk_bug}
+
+    # Check for any invalid characters.
+    grep '[\+\,\;\=\[\]\|\<\>\\\"\:\?\*]' < ${doschk_in} > ${doschk_char}
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    sed < ${doschk_char} >> ${doschk_bug} \
+	-e s'/$/:0: dos: DOSCHK: Invalid DOS character/'
+
+    # Magic to map ari.doschk.out to ari.doschk.bug goes here
+    doschk < ${doschk_in} > ${doschk_out}
+    cat ${doschk_out} | $AWK >> ${doschk_bug} '
+BEGIN {
+    state = 1;
+    invalid_dos = state++; bug[invalid_dos] = "invalid DOS file name";  category[invalid_dos] = "dos";
+    same_dos = state++;    bug[same_dos]    = "DOS 8.3";                category[same_dos] = "dos";
+    same_sysv = state++;   bug[same_sysv]   = "SysV";
+    long_sysv = state++;   bug[long_sysv]   = "long SysV";
+    internal = state++;    bug[internal]    = "internal doschk";        category[internal] = "internal";
+    state = 0;
+}
+/^$/ { state = 0; next; }
+/^The .* not valid DOS/     { state = invalid_dos; next; }
+/^The .* same DOS/          { state = same_dos; next; }
+/^The .* same SysV/         { state = same_sysv; next; }
+/^The .* too long for SysV/ { state = long_sysv; next; }
+/^The .* /                  { state = internal; next; }
+
+NF == 0 { next }
+
+NF == 3 { name = $1 ; file = $3 }
+NF == 1 { file = $1 }
+NF > 3 && $2 == "-" { file = $1 ; name = gensub(/^.* - /, "", 1) }
+
+state == same_dos {
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    print  file ":0: " category[state] ": " \
+	name " " bug[state] " " " dup: " \
+	" DOSCHK - the names " name " and " file " resolve to the same" \
+	" file on a " bug[state] \
+	" system.<br>For DOS, this can be fixed by modifying the file" \
+	" fnchange.lst."
+    next
+}
+state == invalid_dos {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  name ": DOSCHK - " name
+    next
+}
+state == internal {
+    # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
+    print file ":0: " category[state] ": "  bug[state] ": DOSCHK - a " \
+	bug[state] " problem"
+}
+'
+fi
+
+
+
+if ${check_werror_p} && test -d "${srcdir}"
+then
+    echo "`date`: Checking Makefile.in for non- -Werror rules"
+    rm -f ${wwwdir}/ari.werror.*
+    cat "${srcdir}/${project}/Makefile.in" | $AWK > ${wwwdir}/ari.werror.bug '
+BEGIN {
+    count = 0
+    cont_p = 0
+    full_line = ""
+}
+/^[-_[:alnum:]]+\.o:/ {
+    file = gensub(/.o:.*/, "", 1) ".c"
+}
+
+/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
+cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
+
+/\$\(COMPILE\.pre\)/ {
+    print file " has  line " $0
+    if (($0 !~ /\$\(.*ERROR_CFLAGS\)/) && ($0 !~ /\$\(INTERNAL_CFLAGS\)/)) {
+	# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+	print "'"${project}"'/" file ":0: info: Werror: The file is not being compiled with -Werror"
+    }
+}
+'
+fi
+
+
+# From the warnings, generate the doc and indexed bug files
+if ${update_doc_p}
+then
+    cd ${wwwdir}
+    rm -f ari.doc ari.idx ari.doc.bug
+    # Generate an extra file containing all the bugs that the ARI can detect.
+    /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --print-doc >> ari.doc.bug
+    cat ari.*.bug | $AWK > ari.idx '
+BEGIN {
+    FS=": *"
+}
+{
+    # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
+    file = $1
+    line = $2
+    category = $3
+    bug = $4
+    if (! (bug in cat)) {
+	cat[bug] = category
+	# strip any trailing .... (supplement)
+	doc[bug] = gensub(/ \([^\)]*\)$/, "", 1, $5)
+	count[bug] = 0
+    }
+    if (file != "") {
+	count[bug] += 1
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	print bug ":" file ":" category
+    }
+    # Also accumulate some categories as obsolete
+    if (category == "deprecated") {
+	# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+	if (file != "") {
+	    print category ":" file ":" "obsolete"
+	}
+	#count[category]++
+	#doc[category] = "Contains " category " code"
+    }
+}
+END {
+    i = 0;
+    for (bug in count) {
+	# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+	print bug ":" count[bug] ":" cat[bug] ":" doc[bug] >> "ari.doc"
+    }
+}
+'
+fi
+
+
+# print_toc BIAS MIN_COUNT CATEGORIES TITLE
+
+# Print a table of contents containing the bugs CATEGORIES.  If the
+# BUG count >= MIN_COUNT print it in the table-of-contents.  If
+# MIN_COUNT is non -ve, also include a link to the table.Adjust the
+# printed BUG count by BIAS.
+
+all=
+
+print_toc ()
+{
+    bias="$1" ; shift
+    min_count="$1" ; shift
+
+    all=" $all $1 "
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    shift
+
+    title="$@" ; shift
+
+    echo "<p>" >> ${newari}
+    echo "<a name=${title}>" | tr '[A-Z]' '[a-z]' >> ${newari}
+    echo "<h3>${title}</h3>" >> ${newari}
+    cat >> ${newari} # description
+
+    cat >> ${newari} <<EOF
+<p>
+<table>
+<tr><th align=left>BUG</th><th>Total</th><th align=left>Description</th></tr>
+EOF
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    cat ${wwwdir}/ari.doc \
+    | sort -t: +1rn -2 +0d \
+    | $AWK >> ${newari} '
+BEGIN {
+    FS=":"
+    '"$categories"'
+    MIN_COUNT = '${min_count}'
+    BIAS = '${bias}'
+    total = 0
+    nr = 0
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (count < MIN_COUNT) next
+    if (!(category in categories)) next
+    nr += 1
+    total += count
+    printf "<tr>"
+    printf "<th align=left valign=top><a name=\"%s\">", bug
+    printf "%s", gensub(/_/, " ", "g", bug)
+    printf "</a></th>"
+    printf "<td align=right valign=top>"
+    if (count > 0 && MIN_COUNT >= 0) {
+	printf "<a href=\"#,%s\">%d</a></td>", bug, count + BIAS
+    } else {
+	printf "%d", count + BIAS
+    }
+    printf "</td>"
+    printf "<td align=left valign=top>%s</td>", doc
+    printf "</tr>"
+    print ""
+}
+END {
+    print "<tr><th align=right valign=top>" nr "</th><th align=right valign=top>" total "</th><td></td></tr>"
+}
+'
+cat >> ${newari} <<EOF
+</table>
+<p>
+EOF
+}
+
+
+print_table ()
+{
+    categories=""
+    for c in $1; do
+	categories="${categories} categories[\"${c}\"] = 1 ;"
+    done
+    # Remember to prune the dir prefix from projects files
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    cat ${wwwdir}/ari.idx | $AWK >> ${newari} '
+function qsort (table,
+		middle, tmp, left, nr_left, right, nr_right, result) {
+    middle = ""
+    for (middle in table) { break; }
+    nr_left = 0;
+    nr_right = 0;
+    for (tmp in table) {
+	if (tolower(tmp) < tolower(middle)) {
+	    nr_left++
+	    left[tmp] = tmp
+	} else if (tolower(tmp) > tolower(middle)) {
+	    nr_right++
+	    right[tmp] = tmp
+	}
+    }
+    #print "qsort " nr_left " " middle " " nr_right > "/dev/stderr"
+    result = ""
+    if (nr_left > 0) {
+	result = qsort(left) SUBSEP
+    }
+    result = result middle
+    if (nr_right > 0) {
+	result = result SUBSEP qsort(right)
+    }
+    return result
+}
+function print_heading (where, bug_i) {
+    print ""
+    print "<tr border=1>"
+    print "<th align=left>File</th>"
+    print "<th align=left><em>Total</em></th>"
+    print "<th></th>"
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th>"
+	# The title names are offset by one.  Otherwize, when the browser
+	# jumps to the name it leaves out half the relevant column.
+	#printf "<a name=\",%s\">&nbsp;</a>", bug
+	printf "<a name=\",%s\">&nbsp;</a>", i2bug[bug_i-1]
+	printf "<a href=\"#%s\">", bug
+	printf "%s", gensub(/_/, " ", "g", bug)
+	printf "</a>\n"
+	printf "</th>\n"
+    }
+    #print "<th></th>"
+    printf "<th><a name=\"%s,\">&nbsp;</a></th>\n", i2bug[bug_i-1]
+    print "<th align=left><em>Total</em></th>"
+    print "<th align=left>File</th>"
+    print "</tr>"
+}
+function print_totals (where, bug_i) {
+    print "<th align=left><em>Totals</em></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&gt;"
+    printf "</th>\n"
+    print "<th></th>";
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i];
+	printf "<th align=right>"
+	printf "<em>"
+	printf "<a href=\"#%s\">%d</a>", bug, bug_total[bug]
+	printf "</em>";
+	printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, where], bug
+	printf "<a href=\"#%s,%s\">v</a>", next_file[bug, where], bug
+	printf "<a name=\"%s,%s\">&nbsp;</a>", where, bug
+	printf "</th>";
+	print ""
+    }
+    print "<th></th>"
+    printf "<th align=right>"
+    printf "<em>%s</em>", total
+    printf "&lt;"
+    printf "</th>\n"
+    print "<th align=left><em>Totals</em></th>"
+    print "</tr>"
+}
+BEGIN {
+    FS = ":"
+    '"${categories}"'
+    nr_file = 0;
+    nr_bug = 0;
+}
+{
+    # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
+    bug = $1
+    file = $2
+    category = $3
+    # Interested in this
+    if (!(category in categories)) next
+    # Totals
+    db[bug, file] += 1
+    bug_total[bug] += 1
+    file_total[file] += 1
+    total += 1
+}
+END {
+
+    # Sort the files and bugs creating indexed lists.
+    nr_bug = split(qsort(bug_total), i2bug, SUBSEP);
+    nr_file = split(qsort(file_total), i2file, SUBSEP);
+
+    # Dummy entries for first/last
+    i2file[0] = 0
+    i2file[-1] = -1
+    i2bug[0] = 0
+    i2bug[-1] = -1
+
+    # Construct a cycle of next/prev links.  The file/bug "0" and "-1"
+    # are used to identify the start/end of the cycle.  Consequently,
+    # prev(0) = -1 (prev of start is the end) and next(-1) = 0 (next
+    # of end is the start).
+
+    # For all the bugs, create a cycle that goes to the prev / next file.
+    for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	bug = i2bug[bug_i]
+	prev = 0
+	prev_file[bug, 0] = -1
+	next_file[bug, -1] = 0
+	for (file_i = 1; file_i <= nr_file; file_i++) {
+	    file = i2file[file_i]
+	    if ((bug, file) in db) {
+		prev_file[bug, file] = prev
+		next_file[bug, prev] = file
+		prev = file
+	    }
+	}
+	prev_file[bug, -1] = prev
+	next_file[bug, prev] = -1
+    }
+
+    # For all the files, create a cycle that goes to the prev / next bug.
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i]
+	prev = 0
+	prev_bug[file, 0] = -1
+	next_bug[file, -1] = 0
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i]
+	    if ((bug, file) in db) {
+		prev_bug[file, bug] = prev
+		next_bug[file, prev] = bug
+		prev = bug
+	    }
+	}
+	prev_bug[file, -1] = prev
+	next_bug[file, prev] = -1
+    }
+
+    print "<table border=1 cellspacing=0>"
+    print "<tr></tr>"
+    print_heading(0);
+    print "<tr></tr>"
+    print_totals(0);
+    print "<tr></tr>"
+
+    for (file_i = 1; file_i <= nr_file; file_i++) {
+	file = i2file[file_i];
+	pfile = gensub(/^'${project}'\//, "", 1, file)
+	print ""
+	print "<tr>"
+	print "<th align=left><a name=\"" file ",\">" pfile "</a></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&gt;</a>", file, next_bug[file, 0]
+	printf "</th>\n"
+	print "<th></th>"
+	for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
+	    bug = i2bug[bug_i];
+	    if ((bug, file) in db) {
+		printf "<td align=right>"
+		printf "<a href=\"#%s\">%d</a>", bug, db[bug, file]
+		printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, file], bug
+		printf "<a href=\"#%s,%s\">v</a>", next_file[bug, file], bug
+		printf "<a name=\"%s,%s\">&nbsp;</a>", file, bug
+		printf "</td>"
+		print ""
+	    } else {
+		print "<td>&nbsp;</td>"
+		#print "<td></td>"
+	    }
+	}
+	print "<th></th>"
+	printf "<th align=right>"
+	printf "%s", file_total[file]
+	printf "<a href=\"#%s,%s\">&lt;</a>", file, prev_bug[file, -1]
+	printf "</th>\n"
+	print "<th align=left>" pfile "</th>"
+	print "</tr>"
+    }
+
+    print "<tr></tr>"
+    print_totals(-1)
+    print "<tr></tr>"
+    print_heading(-1);
+    print "<tr></tr>"
+    print ""
+    print "</table>"
+    print ""
+}
+'
+}
+
+
+# Make the scripts available
+cp ${aridir}/gdb_*.sh ${wwwdir}
+
+# Compute the ARI index - ratio of zero vs non-zero problems.
+indexes=`${AWK} '
+BEGIN {
+    FS=":"
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1; count = $2; category = $3; doc = $4
+
+    if (bug ~ /^legacy_/) legacy++
+    if (bug ~ /^deprecated_/) deprecated++
+
+    if (category !~ /^gdbarch$/) {
+	bugs += count
+    }
+    if (count == 0) {
+	oks++
+    }
+}
+END {
+    #print "tests/ok:", nr / ok
+    #print "bugs/tests:", bugs / nr
+    #print "bugs/ok:", bugs / ok
+    print bugs / ( oks + legacy + deprecated )
+}
+' ${wwwdir}/ari.doc`
+
+# Merge, generating the ARI tables.
+if ${update_web_p}
+then
+    echo "Create the ARI table" 1>&2
+    oldari=${wwwdir}/old.html
+    ari=${wwwdir}/index.html
+    newari=${wwwdir}/new.html
+    rm -f ${newari} ${newari}.gz
+    cat <<EOF >> ${newari}
+<html>
+<head>
+<title>A.R. Index for GDB version ${version}</title>
+</head>
+<body>
+
+<center><h2>A.R. Index for GDB version ${version}<h2></center>
+
+<!-- body, update above using ../index.sh -->
+
+<!-- Navigation.  This page contains the following anchors.
+"BUG": The definition of the bug.
+"FILE,BUG": The row/column containing FILEs BUG count
+"0,BUG", "-1,BUG": The top/bottom total for BUGs column.
+"FILE,O", "FILE,-1": The left/right total for FILEs row.
+",BUG": The top title for BUGs column.
+"FILE,": The left title for FILEs row.
+-->
+
+<center><h3>${indexes}</h3></center>
+<center><h3>You can not take this seriously!</h3></center>
+
+<center>
+Also available:
+<a href="../gdb/ari/">most recent branch</a>
+|
+<a href="../gdb/current/ari/">current</a>
+|
+<a href="../gdb/download/ari/">last release</a>
+</center>
+
+<center>
+Last updated: `date -u`
+</center>
+EOF
+
+    print_toc 0 1 "internal regression" Critical <<EOF
+Things previously eliminated but returned.  This should always be empty.
+EOF
+
+    print_table "regression code comment obsolete gettext"
+
+    print_toc 0 0 code Code <<EOF
+Coding standard problems, portability problems, readability problems.
+EOF
+
+    print_toc 0 0 comment Comments <<EOF
+Problems concerning comments in source files.
+EOF
+
+    print_toc 0 0 gettext GetText <<EOF
+Gettext related problems.
+EOF
+
+    print_toc 0 -1 dos DOS 8.3 File Names <<EOF
+File names with problems on 8.3 file systems.
+EOF
+
+    print_toc -2 -1 deprecated Deprecated <<EOF
+Mechanisms that have been replaced with something better, simpler,
+cleaner; or are no longer required by core-GDB.  New code should not
+use deprecated mechanisms.  Existing code, when touched, should be
+updated to use non-deprecated mechanisms.  See obsolete and deprecate.
+(The declaration and definition are hopefully excluded from count so
+zero should indicate no remaining uses).
+EOF
+
+    print_toc 0 0 obsolete Obsolete <<EOF
+Mechanisms that have been replaced, but have not yet been marked as
+such (using the deprecated_ prefix).  See deprecate and deprecated.
+EOF
+
+    print_toc 0 -1 deprecate Deprecate <<EOF
+Mechanisms that are a candidate for being made obsolete.  Once core
+GDB no longer depends on these mechanisms and/or there is a
+replacement available, these mechanims can be deprecated (adding the
+deprecated prefix) obsoleted (put into category obsolete) or deleted.
+See obsolete and deprecated.
+EOF
+
+    print_toc -2 -1 legacy Legacy <<EOF
+Methods used to prop up targets using targets that still depend on
+deprecated mechanisms. (The method's declaration and definition are
+hopefully excluded from count).
+EOF
+
+    print_toc -2 -1 gdbarch Gdbarch <<EOF
+Count of calls to the gdbarch set methods.  (Declaration and
+definition hopefully excluded from count).
+EOF
+
+    print_toc 0 -1 macro Macro <<EOF
+Breakdown of macro definitions (and #undef) in configuration files.
+EOF
+
+    print_toc 0 0 regression Fixed <<EOF
+Problems that have been expunged from the source code.
+EOF
+
+    # Check for invalid categories
+    for a in $all; do
+	alls="$alls all[$a] = 1 ;"
+    done
+    cat ari.*.doc | $AWK >> ${newari} '
+BEGIN {
+    FS = ":"
+    '"$alls"'
+}
+{
+    # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
+    bug = $1
+    count = $2
+    category = $3
+    doc = $4
+    if (!(category in all)) {
+	print "<b>" category "</b>: no documentation<br>"
+    }
+}
+'
+
+    cat >> ${newari} <<EOF
+<center>
+Input files:
+`( cd ${wwwdir} && ls ari.*.bug ari.idx ari.doc ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<center>
+Scripts:
+`( cd ${wwwdir} && ls *.sh ) | while read f
+do
+    echo "<a href=\"${f}\">${f}</a>"
+done`
+</center>
+
+<!-- /body, update below using ../index.sh -->
+</body>
+</html>
+EOF
+
+    for i in . .. ../..; do
+	x=${wwwdir}/${i}/index.sh
+	if test -x $x; then
+	    $x ${newari}
+	    break
+	fi
+    done
+
+    gzip -c -v -9 ${newari} > ${newari}.gz
+
+    cp ${ari} ${oldari}
+    cp ${ari}.gz ${oldari}.gz
+    cp ${newari} ${ari}
+    cp ${newari}.gz ${ari}.gz
+
+fi # update_web_p
+
+# ls -l ${wwwdir}
+
+exit 0

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-09-26 22:15                   ` [RFA-v5] " Pierre Muller
@ 2012-10-08 21:21                     ` Pierre Muller
  2012-10-09  3:45                       ` Sergio Durigan Junior
  2012-10-22 21:04                     ` Joel Brobecker
  1 sibling, 1 reply; 32+ messages in thread
From: Pierre Muller @ 2012-10-08 21:21 UTC (permalink / raw)
  To: gdb-patches

Nobody replied to my last RFA...

Is anyone still interested in inserting ARI into GDB sources?

Pierre Muller

> -----Message d'origine-----
> De : gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] De la part de Pierre Muller
> Envoyé : jeudi 27 septembre 2012 00:15
> À : gdb-patches@sourceware.org
> Objet : [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari
> directory
> 
>   Here is again my patch to include
> ARI web page creation into gdb/contrib directory
>   This is almost v4 except that
> the ChangeLog entry is directly in gdb directory
> as most people seemed to be opposed to creating a ChangeLog file in
> contrib. subdirectory.
> 
>   Joel made some suggestions about changing create-web-ari-in-src.sh
> in order to create all files directly in the same directory,
> but these script generate a lot of "useless" files
> and having them together with the cvs files still worries me.
> 
> 
> Pierre Muller
> GDB pascal language maintainer
> 
> 
> gdb/ChangeLog entry:
> 
> 2012-09-27  Pierre Muller  <muller@ics.u-strasbg.fr>
> 
>         Incorporate ARI web page generator into GDB sources.
>         * contrib/ari/create-web-ari-in-src.sh: New file.
>         * contrib/ari/gdb_ari.sh: New file.
>         * contrib/ari/gdb_find.sh: New file.
>         * contrib/ari/update-web-ari.sh: New file.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-10-08 21:21                     ` Pierre Muller
@ 2012-10-09  3:45                       ` Sergio Durigan Junior
  2012-10-09  5:52                         ` Joel Brobecker
  0 siblings, 1 reply; 32+ messages in thread
From: Sergio Durigan Junior @ 2012-10-09  3:45 UTC (permalink / raw)
  To: Pierre Muller; +Cc: gdb-patches

On Monday, October 08 2012, Pierre Muller wrote:

> Nobody replied to my last RFA...
>
> Is anyone still interested in inserting ARI into GDB sources?

Hey Pierre,

I think we are still interested (I am, at least!).  Sorry, I could not
really figure what is missing to review on your patch.  You gave an
explanation on why you did not follow Joel's advice, and his silence
tells me that he's probably OK with your reasons.  So, is there anything
else you want to be reviewed?

I glanced over the scripts once more (didn't really have time to do a
deep review this time, sorry), and could not find anything that caught
my attention.  I also think that (a) they went through a lot of testing
and customization during all those RFA's, and (b) they are not really
critical to the project, so if there's something wrong with them we can
fix without having to worry much.

Anyway, with all that said, I think that they are pretty much OK for
pushing into the repository.  But please keep in mind that I am not a
maintainer :-).

Regards,

-- 
Sergio


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-10-09  3:45                       ` Sergio Durigan Junior
@ 2012-10-09  5:52                         ` Joel Brobecker
  0 siblings, 0 replies; 32+ messages in thread
From: Joel Brobecker @ 2012-10-09  5:52 UTC (permalink / raw)
  To: Sergio Durigan Junior; +Cc: Pierre Muller, gdb-patches

Sorry guys, I'm just not available for reviews, this week. But I
definitely plan on reviewing it! I'm OK with Pierre's reasons
regarding not following my advice. I just need to find a little
bit of time to remember past discussions and look at the patches
again. That being said: It does not have to be me who does the
review. Anyone else, please feel free to take over.

-- 
Joel


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-09-26 22:15                   ` [RFA-v5] " Pierre Muller
  2012-10-08 21:21                     ` Pierre Muller
@ 2012-10-22 21:04                     ` Joel Brobecker
  2012-10-23 15:21                       ` Pierre Muller
  2012-10-23 17:13                       ` Sergio Durigan Junior
  1 sibling, 2 replies; 32+ messages in thread
From: Joel Brobecker @ 2012-10-22 21:04 UTC (permalink / raw)
  To: Pierre Muller; +Cc: gdb-patches

Sorry about the delay, Pierre,

>   Joel made some suggestions about changing create-web-ari-in-src.sh
> in order to create all files directly in the same directory,
                                               ^^^^ current working
> but these script generate a lot of "useless" files
> and having them together with the cvs files still worries me.

I think we should make sure that these scripts are callable from
out-of-tree, just the same way we can build GDB out-of-tree (which
is the recommended way to building a lot the GNU software).

But I'm fine with the current version. Let's make progress rather
than bikeshedding further :).

> 2012-09-27  Pierre Muller  <muller@ics.u-strasbg.fr>
> 
>         Incorporate ARI web page generator into GDB sources.
>         * contrib/ari/create-web-ari-in-src.sh: New file.
>         * contrib/ari/gdb_ari.sh: New file.
>         * contrib/ari/gdb_find.sh: New file.
>         * contrib/ari/update-web-ari.sh: New file.

OK for me. I'm not sure if anyone else want to comment before
we go ahead?

-- 
Joel


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-10-22 21:04                     ` Joel Brobecker
@ 2012-10-23 15:21                       ` Pierre Muller
  2012-10-23 17:13                       ` Sergio Durigan Junior
  1 sibling, 0 replies; 32+ messages in thread
From: Pierre Muller @ 2012-10-23 15:21 UTC (permalink / raw)
  To: 'Joel Brobecker'; +Cc: gdb-patches

 Hi Joel,

  I can only confirm that it is indeed possible to 
launch the script from the build directory,
and that this only generates new files in sub-directories of the gdb build
directory.

  Of course, there is nothing yet in the gdb/Makefile.in
to support this, but my idea was to submit that later anyhow.

Pierre Muller

> -----Message d'origine-----
> De : gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] De la part de Joel Brobecker
> Envoyé : lundi 22 octobre 2012 23:04
> À : Pierre Muller
> Cc : gdb-patches@sourceware.org
> Objet : Re: [RFA-v5] Add scripts to generate ARI web pages to
> gdb/contrib/ari directory
> 
> Sorry about the delay, Pierre,
> 
> >   Joel made some suggestions about changing create-web-ari-in-src.sh
> > in order to create all files directly in the same directory,
>                                                ^^^^ current working
> > but these script generate a lot of "useless" files
> > and having them together with the cvs files still worries me.
> 
> I think we should make sure that these scripts are callable from
> out-of-tree, just the same way we can build GDB out-of-tree (which
> is the recommended way to building a lot the GNU software).
> 
> But I'm fine with the current version. Let's make progress rather
> than bikeshedding further :).
> 
> > 2012-09-27  Pierre Muller  <muller@ics.u-strasbg.fr>
> >
> >         Incorporate ARI web page generator into GDB sources.
> >         * contrib/ari/create-web-ari-in-src.sh: New file.
> >         * contrib/ari/gdb_ari.sh: New file.
> >         * contrib/ari/gdb_find.sh: New file.
> >         * contrib/ari/update-web-ari.sh: New file.
> 
> OK for me. I'm not sure if anyone else want to comment before
> we go ahead?
> 
> --
> Joel


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-10-22 21:04                     ` Joel Brobecker
  2012-10-23 15:21                       ` Pierre Muller
@ 2012-10-23 17:13                       ` Sergio Durigan Junior
  2012-10-30 15:16                         ` Pierre Muller
       [not found]                         ` <4332.56673063642$1351610219@news.gmane.org>
  1 sibling, 2 replies; 32+ messages in thread
From: Sergio Durigan Junior @ 2012-10-23 17:13 UTC (permalink / raw)
  To: Joel Brobecker; +Cc: Pierre Muller, gdb-patches

On Monday, October 22 2012, Joel Brobecker wrote:

> But I'm fine with the current version. Let's make progress rather
> than bikeshedding further :).
[...]
> OK for me. I'm not sure if anyone else want to comment before
> we go ahead?

FWIW, that's my opinion as well.  The patch has been reviewed many
times, and it works overall, so I believe it's time to check it in and
then continue improving it.

-- 
Sergio


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-10-23 17:13                       ` Sergio Durigan Junior
@ 2012-10-30 15:16                         ` Pierre Muller
       [not found]                         ` <4332.56673063642$1351610219@news.gmane.org>
  1 sibling, 0 replies; 32+ messages in thread
From: Pierre Muller @ 2012-10-30 15:16 UTC (permalink / raw)
  To: 'Sergio Durigan Junior', 'Joel Brobecker'; +Cc: gdb-patches

  Hi all,

  may I consider that, as only two persons replied,
that it is OK to check version 5 in?

Pierre Muller

> -----Message d'origine-----
> De : Sergio Durigan Junior [mailto:sergiodj@redhat.com]
> Envoyé : mardi 23 octobre 2012 19:14
> À : Joel Brobecker
> Cc : Pierre Muller; gdb-patches@sourceware.org
> Objet : Re: [RFA-v5] Add scripts to generate ARI web pages to
> gdb/contrib/ari directory
> 
> On Monday, October 22 2012, Joel Brobecker wrote:
> 
> > But I'm fine with the current version. Let's make progress rather
> > than bikeshedding further :).
> [...]
> > OK for me. I'm not sure if anyone else want to comment before
> > we go ahead?
> 
> FWIW, that's my opinion as well.  The patch has been reviewed many
> times, and it works overall, so I believe it's time to check it in and
> then continue improving it.
> 
> --
> Sergio


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
       [not found]                         ` <4332.56673063642$1351610219@news.gmane.org>
@ 2012-11-01 19:57                           ` Tom Tromey
  2012-11-01 22:39                             ` Pierre Muller
  0 siblings, 1 reply; 32+ messages in thread
From: Tom Tromey @ 2012-11-01 19:57 UTC (permalink / raw)
  To: Pierre Muller
  Cc: 'Sergio Durigan Junior', 'Joel Brobecker', gdb-patches

>>>>> "Pierre" == Pierre Muller <pierre.muller@ics-cnrs.unistra.fr> writes:

Pierre>   may I consider that, as only two persons replied,
Pierre> that it is OK to check version 5 in?

Please go ahead.

thanks,
Tom


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [RFA-v5] Add scripts to generate ARI web pages to gdb/contrib/ari directory
  2012-11-01 19:57                           ` Tom Tromey
@ 2012-11-01 22:39                             ` Pierre Muller
  0 siblings, 0 replies; 32+ messages in thread
From: Pierre Muller @ 2012-11-01 22:39 UTC (permalink / raw)
  To: 'Tom Tromey'
  Cc: 'Sergio Durigan Junior', 'Joel Brobecker', gdb-patches

> De : gdb-patches-owner@sourceware.org
[mailto:gdb-patches-owner@sourceware.org] De la part de Tom Tromey
> Envoyé : jeudi 1 novembre 2012 20:58
> À : Pierre Muller
> Cc : 'Sergio Durigan Junior'; 'Joel Brobecker'; gdb-patches@sourceware.org
> Objet : Re: [RFA-v5] Add scripts to generate ARI web pages to
gdb/contrib/ari directory

>>>>> "Pierre" == Pierre Muller <pierre.muller@ics-cnrs.unistra.fr> writes:

> Pierre>   may I consider that, as only two persons replied, that it is 
> Pierre> OK to check version 5 in?

> Please go ahead.

Thank you for the approval,
I checked the files in 
http://sourceware.org/ml/gdb-cvs/2012-11/msg00003.html

and already sent two 
new RFA concerning problems related to the use of these
files.

Pierre Muller


^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2012-11-01 22:39 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-05-18 22:41 [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory Pierre Muller
2012-05-25  8:09 ` PING " Pierre Muller
2012-05-25 19:47 ` Jan Kratochvil
2012-05-26 12:41   ` [RFA-v2] " Pierre Muller
2012-05-27  4:06     ` Sergio Durigan Junior
2012-05-27 19:53       ` Pierre Muller
2012-05-27 22:03         ` Sergio Durigan Junior
2012-05-28 18:34           ` [RFA-v3] " Pierre Muller
2012-05-28 18:38             ` Pierre Muller
2012-05-29 13:02             ` Joel Brobecker
2012-05-29 13:13               ` Pedro Alves
2012-05-31  6:56                 ` Pierre Muller
2012-05-31 15:59                   ` Joel Brobecker
2012-06-14 12:36                 ` [RFA-v4] " Pierre Muller
2012-06-14 16:02                   ` Joel Brobecker
2012-06-14 16:14                     ` Pierre Muller
2012-06-14 16:22                       ` Joel Brobecker
2012-08-21 10:27                         ` About RFA for " Pierre Muller
2012-08-21 22:36                           ` Sergio Durigan Junior
     [not found]                         ` <50336283.a2db440a.600c.105dSMTPIN_ADDED@mx.google.com>
2012-08-21 22:25                           ` Doug Evans
2012-09-26 22:15                   ` [RFA-v5] " Pierre Muller
2012-10-08 21:21                     ` Pierre Muller
2012-10-09  3:45                       ` Sergio Durigan Junior
2012-10-09  5:52                         ` Joel Brobecker
2012-10-22 21:04                     ` Joel Brobecker
2012-10-23 15:21                       ` Pierre Muller
2012-10-23 17:13                       ` Sergio Durigan Junior
2012-10-30 15:16                         ` Pierre Muller
     [not found]                         ` <4332.56673063642$1351610219@news.gmane.org>
2012-11-01 19:57                           ` Tom Tromey
2012-11-01 22:39                             ` Pierre Muller
2012-06-22 16:10                 ` [RFA-v3] " Tom Tromey
2012-05-26  0:12 ` [RFA] " Sergio Durigan Junior

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox