7 News Support Utilities

7.4 Miscellaneous maintenance utilities

This section describes the programs supplied in the News distribution that are designed to ease some of the common tasks that crop up in news management. Feel free to use them or not, as your situation warrants.

7.4.1 BatchNews

BatchNews is a utility designed to convert groups of text files into rnews batch format, so they can be easily fed into news traffic, archived, etc. Each file in the input list is added to the batch as a single newsitem.

7.4.1.1 Installation

BatchNews.Exe is created in the News_Dist directory by NewsBuild.Com. You may place it in any convenient location (e.g. News_Manager).

7.4.1.2 Command syntax

BatchNews should be invoked via a foreign command; it will obtain and parse its own command line. It behaves according to DCL syntax, and accepts the following parameters and qualifiers:

$ BatchNews/Before=beftime/Since=aftime/NPrefix/Log -
/Size=batchsize/Max_Size=totalsize/BatchName=outname -
/FileList=listfile input_list

input_list -- A comma-separated list of input file specifications to be processed. Wildcards are allowed in any of the file specifications.

/Before=beftime -- Specifies that files created before beftime are to be processed. Beftime should be a valid VMS absolute time. This qualifier cannot be negated.

/Since=aftime -- Specifies that files created after aftime are to be processed. Aftime should be a valid VMS absolute time. This qualifier cannot be negated.

/NPrefix -- Causes the character 'N' to be prepended to each line of the output batch file. /NoNPrefix cancels the effect of a previous /NPrefix.

/Log -- Causes the names and sizes of input files processed to be displayed. In addition, if multiple output batch files are created, the name and size of each batch file is displayed when it is closed. /NoLog cancels the effect of a previous /Log.

/Size=batchsize -- Specifies that a new batch file should be created when the size of the current batch file exceeds batchsize bytes. BatchNews finishes writing the current input file before creating a new batch file, so newsitems do not span batch files. Successive batch files have the name outnameNNN, where NNN is a three-digit decimal number beginning at 001 and increasing by one for each batch file. If files with names outname001 to outnameXXX already exist, then BatchNews starts the current set of batch files at outnameNNN, where NNN = XXX + 1. Batchsize may be a decimal number, an octal number beginning with '%O', or a hexadecimal number beginning with '%X'. /NoSize cancels the effect of a previous /Size. This qualifier is ignored if the batch is being written to Sys$Output.

/Max_Size=totalsize -- Specifies that all processing of input files should stop after totalsize bytes have been written to batch files. BatchNews finishes writing the current input file before stopping. Totalsize may be a decimal number, an octal number beginning with '%O', or a hexadecimal number beginning with '%X'. /NoMax_Size cancels the effect of a previous /Max_Size.

/BatchName=outname -- Specifies the name template of the output batch file. If /BatchName is not specified, the output batch is written to Sys$Output. When writing a batch to Sys$Output, only a single output file (not limited in size) will be produced. This qualifier cannot be negated.

/FileList=listfile -- Specifies that the names of input files are contained in listfile. Listfile should be a text file containing one input file specification per line. Wildcards are not allowed. If files are specified both via input_list and listfile, the files in input_list are processed first, followed by those in listfile. If batching is limited by totalsize while files still remain in listfile, a new version of listfile is created, which contains the remaining (unprocessed) names from the original listfile.

/Help -- Displays a summary of BatchNews command syntax on Sys$Error and then exits.
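
Since BatchNews parses its own command line, it must be set up as a foreign command before any of the qualifiers above can be used. The following is only a sketch (it assumes you placed the image in News_Manager, as suggested in the installation note; the file names and size value are illustrative):

$ BatchNews :== $News_Manager:BatchNews.Exe
$ BatchNews/Log/BatchName=Daily.Batch/Size=100000 *.Txt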

7.4.1.3 Session Summary

When invoked, BatchNews simply reads each file matching the input file specification, and copies it into the output file, adding an appropriate rnews line before the file. The contents of the input file are not checked or modified in any way. This process continues until the list of input files is exhausted, or until the number of bytes specified with the /Max_Size qualifier has been exceeded. Output is directed to Sys$Output, unless the /BatchName qualifier is specified, in which case its value is the output file name. If the /Size qualifier is specified, several output files may be created, with names derived as described above.

Note that if the size specified by /Size or /Max_Size is exceeded, BatchNews does not take action until it has finished processing the current input file. This means that output files may be larger than the value specified by /Size, and more data may be copied than specified by /Max_Size. Therefore, you should consider these qualifiers guidelines to be used by BatchNews, rather than hard upper limits.

7.4.1.4 Examples

The following example combines all files with type .Rno in the current directory into batch files with the base name Runoff.Batch. When a batch file exceeds 51000 bytes (~100 disk blocks), subsequent input files are placed in a new batch file. If more than one batch file is generated, their names will be Runoff.Batch001, Runoff.Batch002, Runoff.Batch003, and so on.
$ BatchNews/BatchName=Runoff.Batch/Size=51000 *.Rno

The following example copies the files named in BatchList.Txt into an rnews batch, which is written to Sys$Output (e.g. the log file of a batch job).

$ Create BatchList.Txt
Project_Common:Stats.Today
Project_Common:Transaction_List.Rpt
Project_Admin:BugFix.InProgress
<Ctrl-Z>
$ BatchNews/FileList=BatchList.Txt/Max_Size=20000

If, say, Stats.Today is 2 kB and Transaction_List.Rpt is 25 kB, BatchNews would not process BugFix.InProgress, but would instead create a new version of BatchList.Txt containing only BugFix.InProgress.
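
Continuing this example, a later run could simply be pointed at the new version of BatchList.Txt to pick up where the previous run left off, since it now names only the unprocessed file:

$ Type BatchList.Txt
Project_Admin:BugFix.InProgress
$ BatchNews/FileList=BatchList.Txt/Max_Size=20000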

7.4.2 Cachem

If you are using NNTP message ID caching, the Cachem utility allows you to examine and modify the message ID cache file. It is used principally for debugging the cache code. Under normal circumstances, the cache takes care of itself, so you do not need to run this program unless you develop a problem with the cache, or you're curious about its innards.

In order to run Cachem, you must have SysLck and SysGbl privileges set.
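
Assuming your account is authorized to hold these privileges, you might enable them for the session before running Cachem with something like:

$ Set Process/Privileges=(SysLck,SysGbl)
$ Run [News_Dist]Cachem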

7.4.2.1 Installation

Cachem.Exe is created in the News_Dist directory by NewsBuild.Com. You may place it in any convenient location (e.g. News_Manager).

7.4.2.2 Command syntax

Cachem is invoked using the DCL Run command; it takes no parameters or qualifiers. At the prompt
Option (h for Help)?
it accepts the following single character commands:
A -- Add a message ID to the cache. Cachem will prompt for the message ID to add.
C -- List the number of message IDs inserted into the cache or rejected because they were already in the cache.
D -- Dump the contents of the cache for a range of entry numbers. Cachem will prompt for the first and last index (entry number) in the range.
E -- Exit Cachem.
F -- Find a message ID in the cache. Cachem will prompt for the message ID to find, and will display a message indicating whether the message ID was found in the cache.
L -- List a range of message IDs to a file. Cachem will prompt for a file name, and the first and last indices (entry numbers) in the range you wish to list. The resulting file is a normal (Stream-LF) text file with 1 message ID per line.
R -- Reset the insertion/rejection counters used to generate the counts displayed by the C command.
S -- Search for a hash list greater than a certain depth. Cachem prompts for the hash list index at which to start and the list depth, and prints the first list found after the index given whose depth is greater than or equal to the depth value given.
V -- Verify integrity of the cache and print statistics. If an error is found in any of the cache entries, a message is displayed. Once all entries have been searched, statistics describing the cache are displayed (see example below). As it examines cache entries, it displays a running count of its progress.
Z -- Zero the cache. This deletes all entries, and leaves the cache empty.

7.4.2.3 Session summary

When Cachem starts up, it writes an entry indicating this to the cache error log (News_Manager:News_Cache.Log). After that, it's pretty straightforward - Cachem prompts for a command, and then does what you ask. When you are finished, it writes an entry to the cache error log indicating that it is exiting. This allows you to keep track of when Cachem was run and, potentially, when it was used to reset the cache.

In order to understand the statistics Cachem displays, it helps to know a little about how the message ID cache works. When a message ID string is passed to the cache code, it generates a 32 bit hash value from the string, and uses the low order 14 bits of this hash value as an index into an array of linked lists; each list holds the message IDs in the cache which share that index value. It then walks the list to determine whether the message ID string it is checking is already in the cache. If it was asked to find the message ID string, it simply returns 1 if the ID string is present, or 0 if it is absent. If it was asked to add the message ID string to the cache, and the ID is not already present, it adds a new entry to the appropriate linked list and increments the insertion counter; if the ID was already present, it does not reinsert it, but increments the rejection counter. As new IDs are added to the cache, old ones are removed to make space for them.
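
As a rough illustration of where the list index comes from (a sketch only; the actual computation happens inside the cache code, not in DCL), masking a 32 bit hash value down to its low order 14 bits yields an index between 0 and 16383, which is why the V command reports on the order of 16000 hash lists:

$ hash = %X12345678              ! hypothetical 32 bit hash value
$ index = hash .AND. %X3FFF      ! keep the low order 14 bits (0-16383)
$ Write Sys$Output "Hash list index = ", index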

When generating statistics in response to the V command, it checks each list, and displays the total number of lists, as well as information on the distribution of list depth (the number of entries in a given hash list). In addition it displays the number and percentage of multiple links (i.e. the number of items which are not the first element in their hash list), and the number and percentage of collisions (i.e. the number of times the code has had to compare a string to an existing entry and move on, instead of finding it immediately when it looks up the hash index).
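
Judging from the sample output below, both percentages appear to be computed relative to the number of used hash links: 1430 multiple links out of 8191 used links gives 1430/8191, or about 17.46 percent, and 1680 collisions out of 8191 gives about 20.51 percent, matching the figures shown.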

7.4.2.4 Examples

The following example uses Cachem to obtain information about the usage of the cache:
$ Run [News_Dist]Cachem
Option (h for Help)? C
Number of insertions     = 15695
Number of rejections     = 16155
Percentage of rejects    =  50.7
Current index in cache   = 7505
Date counts last reset   =  6-JUL-1993 16:06:40.36
Option (h for Help)? V
Invalid back pointer in hash index 13250 (ptr=318) :
- bptr = 0, should be 5414
16000...


Summary :
---------
Number of terminated hash linked lists = 16383
Number of non-terminated hash linked lists = 0
Number of zero-length hash linked lists = 9872
Longest hash linked list has depth = 5
Average hash linked list depth = 1.258025
Total number of links = 8191
Number of used hash links = 8191

Histogram of hash linked list depths : 
Depth = 0 : 9872
Depth = 1 : 5081
Depth = 2 : 1213
Depth = 3 : 187
Depth = 4 : 27
Depth = 5 : 3
Number of multiple links = 1430
Percentage of multiple links = 17.458186
Number of collisions = 1680
Percentage of collisions = 20.510316
Option (h for Help)? E
$

This example shows how to check whether a particular message ID is present in the cache:

$ Run [News_Dist]Cachem
Option (h for Help)? F
Id? <2lfvt6$3n6@net.bio.net>
Found in cache list
Option (h for Help)? F
Id? <carthago$delenda$est@senatus.roma.org>
Not found in cache list
Option (h for Help)? e
$

7.4.3 FeedCheck

FeedCheck examines the logs of your NewsAdd job and produces summaries of traffic to and from adjacent nodes. In addition, it can generate statistics describing items junked during Add File processing.

As of the current release, a few minor bugs remain in FeedCheck, so you may encounter some spurious error messages, but it's still a very useful tool for quickly getting an overview of news traffic.

7.4.3.1 Installation

FeedCheck.Exe is created in the News_Dist directory by NewsBuild.Com. You may place it in any convenient location (e.g. News_Manager).

7.4.3.2 Command syntax

FeedCheck should be invoked via a foreign command; it will obtain and parse its own command line. It behaves according to DCL syntax, and accepts the following parameters and qualifiers:

$ FeedCheck/HostName=remhost/Junk/Verbose=msglvl -
/Before=beftime/Since=aftime/Output=outfile input_list

input_list -- A comma-separated list of input file specifications to be processed. Wildcards are allowed in any of the file specifications. This parameter is required.

/HostName=remhost -- Default host to use as the source of incoming items from batches which do not include a host name. At present this qualifier is not used by FeedCheck, and remhost is ignored. This qualifier cannot be negated.

/Junk -- Directs FeedCheck to print a summary of items which have been junked during Add File processing, including newsgroups in the headers of junked items, and the sites from which you received them.

/Verbose=msglvl -- Indicates that FeedCheck should print status messages as it processes input files. The value of msglvl is interpreted as follows:
Msglvl  Effect
  1     Print a summary of the disposition of items to and from each
        site in each file processed, as well as an overall summary
        for each site.
  2     Print a status message each time a new input file is
        processed ("Processing Log File:") and each time the log
        entries for a new batch of news items are processed
        ("Analyzing Log for Batch:"). In addition, the summary report
        for each log file shows the name of the file, the number of
        lines in the file, and the number of lines which could not be
        parsed.
  3     Print a status message each time a file matching input_list
        is checked ("Checking File:"). This differs from the
        "Processing Log File:" message in that the latter is printed
        only when a file is actually examined and its contents
        included in the statistics generated by FeedCheck (i.e. the
        file is not excluded by beftime or aftime, and can be read
        successfully).
The effects of each level are cumulative; that is, specifying a msglvl of 2 produces the effects of both level 1 and level 2, and so on. If msglvl is not specified with /Verbose, it defaults to 1.

/Before=beftime -- Specifies that files created before beftime are to be processed. Beftime should be a valid VMS absolute time. This qualifier cannot be negated.

/Since=aftime -- Specifies that files created after aftime are to be processed. Aftime should be a valid VMS absolute time. This qualifier cannot be negated.

/Output=outfile -- Directs FeedCheck to write results to outfile, instead of Sys$Output.

/Help -- Displays a summary of FeedCheck command syntax on Sys$Error and then exits. You must provide input_list on the command line when using this qualifier, but it is ignored. This qualifier cannot be negated.

7.4.3.3 Session Summary

FeedCheck scans each file matching the criteria set forth in input_list, beftime and aftime, looking for the output produced by News' Add File command (which explains why it's usually run on log files from your NewsAdd job). When it finds these messages, it uses them to compile statistics about the number and size of items received from upstream sites, what happened to those items during Add File processing, and what was forwarded to downstream sites. Once all input files have been read, it displays a table on Sys$Output which lists, for each site, the number and size of batch files received, the disposition of the items they contained (accepted, duplicate, junked, or rejected), and the number of items and bytes transmitted to that site; the example output below shows the full set of columns. You can get additional information, or direct the output to a file, using the command line qualifiers described above.

There are a few minor bugs remaining in FeedCheck, so don't be surprised if you see a number of messages about unexpected host changes. Also, since it only prints statistics for upstream sites sending you more than one batch, the totals for received items shown at the bottom of the report may not equal the sum of the individual entries displayed. These idiosyncrasies aside, FeedCheck is a very nice way to keep up with traffic at a glance. For instance, you may want to do something like

$ FeedCheck/Before=Today/Since=Yesterday/Junk -
           /Output=Sys$Scratch:FeedCheck.Tmp -
           News_Manager_Dev:[Log]NewsAdd.Log;*
$ subj = "FeedCheck daily report for " + -
         F$CvTime("Yesterday","Absolute","Date")
$ Mail/Subject="''subj'" Sys$Scratch:FeedCheck.Tmp Me
$ Delete/NoLog/NoConfirm Sys$Scratch:FeedCheck.Tmp;*

every day, so you'll develop a feel for traffic patterns, and spot any problems quickly.
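
If you adopt a daily report along these lines, the usual arrangement is to put the commands in a procedure which resubmits itself each day; the following is a hypothetical sketch (the procedure name, queue, and timing are illustrative only):

$ ! FeedCheck_Daily.Com -- hypothetical wrapper around the commands shown above
$ ! Resubmit this procedure so that it runs again tomorrow, then build the report.
$ Submit/Restart/After="Tomorrow+0:05" News_Manager:FeedCheck_Daily.Com
$ ! ... FeedCheck, Mail, and Delete commands from the example above go here ...
$ Exit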

7.4.3.4 Examples

The following example generates a basic traffic report from the previous day's NewsAdd logs for node SENATUS (ca. 190 B.C.):
$ FeedCheck/Before=Today/Since=Yesterday/Junk -
News_Manager_Dev:[Log]NewsAdd.Log;*

The output from this command might be something like this (the table has been wrapped to fit within the margins of this document):

Site:     Files:Accept: Dups:Junked:Reject:Total:
===================================================
carthago  2(2.5K)    2     .      .     .      2   
causae    3(2.5M)  717     .      .     .    717   
censores       .     .     .      .     .      .   
scipio         .     .     .      .     .      .   
                                                   
            Files:Accept: Dups:Junked:Reject:Total:
TOTAL:   72(2.5M)  720     .      .     .    720   

 %Bad:%TAcpt: I've: Xmit:Bytes:
=============================  
   .   0.3%     .     .     .  
   .  99.6%     .     .     .  
   .      .     .   717  2.5M  
   .      .     2     .     .  
                               
 %Bad:%TAcpt: I've: Xmit:Bytes:
   .      .    2.   717  2.5M

The following example shows the result of scanning the same logs with this slightly modified command (again, the table has been wrapped to fit within the margins):
$ FeedCheck/Before=Today/Since=Yesterday/Junk/Verbose=3 -
News_Manager_Dev:[Log]NewsAdd.Log;*

The output in this case might be something like:

Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;143
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;142
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;141
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;141
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;140
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;140
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;139
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;139
Analyzing Log for Batch: USER1:[NEWS]NNTP_900308171410_8C27.BATCH;1
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;138
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;138
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;137
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;137
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;136
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;136
Analyzing Log for Batch: USER1:[NEWS]NEWS_LOCAL.BATCH;1
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;135
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;135
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;134
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;134
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;133
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;133
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;132
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;132
Analyzing Log for Batch: USER1:[NEWS]NNTP_900308101817_863D.BATCH;1
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;131
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;131
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;130
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;130
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;129
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;129
Analyzing Log for Batch: USER1:[NEWS]NNTP_900308034825_823A.BATCH;1
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;128
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;128
Analyzing Log for Batch: USER1:[NEWS]NEWS_LOCAL.BATCH;1
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;127
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;127
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;126
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;126
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;125
Processing Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;125
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;124
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;123
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;122
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;121
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;120
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;119
    .      .       .
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;3
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;2
Checking File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;1
Site:     Files:Accept: Dups:Junked:Reject:Total:
=================================================
Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;132, 
gallia                    1       0       0      
censores       .     .     .      .     .      . 
Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;136, 
carthago       1       0       0        0        
Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;128, 
carthago       1       0       0        0        
carthago 2(2.5K)     2     .      .     .      2 
scipio         .     .     .      .     .      . 
Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;129, 
causae                   10       0       0      
Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;132, 
causae                  166       0       0      
Log File: NEWS_MANAGER_DEV:[LOG]NEWSADD.LOG;139, 
causae                  541       0       0      
causae    3(2.5M)  717     .      .     .    717 
                                                 
          Files:Accept: Dups:Junked:Reject:Total:
TOTAL:   72(2.5M)  720     .      .     .    720 

 %Bad:%TAcpt: I've: Xmit:Bytes:  
===============================  
Lines: 97, badlines: 0           
  0                              
     .      .     .   717  2.5M  
Lines: 108, badlines: 0          
                                 
Lines: 108, badlines: 0          
                                 
     .   0.3%     .     .     .  
     .      .     2     .     .  
Lines: 107, badlines: 0          
  0                              
Lines: 263, badlines: 0          
  0                              
Lines: 638, badlines: 0          
  0                              
     .  99.6%     .     .     .  
                                 
 %Bad:%TAcpt: I've: Xmit:Bytes:  
     .      .     2   717  2.5M

Note in this report that one item was received from site gallia, but since that site sent only one batch, it didn't appear in the brief report shown in the previous example, so the "TOTAL" statistics appeared to be off by one item.

7.4.4 NewsShutDown

NewsShutDown is designed to let you perform arbitrary tasks which require exclusive access to resources (particularly the local database) used by News. In most cases, it will produce faster results than using the older News_Stop logical name to force other News images to exit.

In order for NewsShutDown to function properly, all programs in the News distribution must participate in a common locking scheme, so that everyone is aware of everyone else's presence. From your perspective, this means that any time you run an image from the News distribution, it must have SysLck privilege available to it. You can do this by running the image in a process with SysLck set, or you can install the image with SysLck. If you don't do this, the image will assume you didn't want it to participate in the locking scheme, and will sail happily along. This won't hurt it, but there will be no record of that image's activity, so a utility like NewsShutDown may think it has gained exclusive access to the News resource, when in fact it hasn't. The News images are a little picky here -- they check whether SysLck is already set, but won't attempt to set or reset it themselves.
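
For example, you can either enable the privilege in the process that runs the image, or install the image with the privilege. A sketch of both approaches, assuming the images live in News_Manager (which images, if any, you install is a local decision):

$ Set Process/Privileges=SysLck                            ! per-process approach
$ Install Add News_Manager:News.Exe /Privileged=(SysLck)   ! installed-image approach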

7.4.4.1 Installation

NewsShutDown.Exe is created in the News_Dist directory by NewsBuild.Com. You may place it in any convenient location (e.g. News_Manager).

7.4.4.2 Command Syntax

NewsShutDown should be invoked via the DCL Run command. It takes no parameters or qualifiers on the command line, but gets the information it needs from two logical names: News_Locked_Command, which supplies the command to execute once exclusive access has been obtained, and News_Locked_Wait_Minutes, which gives the number of minutes to wait for the lock to be granted. Both of these logical names are translated using the usual order of logical name tables and access modes, so they can be (and usually are) defined in the process table in supervisor or user mode just before invoking NewsShutDown.

7.4.4.3 Session Summary

When NewsShutDown is run, it tries to translate the logical name News_Locked_Command, and, if unsuccessful, prints an error message and exits. If it succeeds in obtaining the command, it tries to enqueue an exclusive lock on the News resource. This will trigger blocking ASTs put in place by other News images which are currently running. These ASTs set a flag which the image checks the next time it looks for user input, and, if the flag is set, the image exits with status SS$_DEADLOCK (decimal 3594). News.Exe, NNTP_Xmit, and NNTP_Xfer will exit within 20 minutes in any case. The single threaded NNTP servers will close the connection and exit as soon as they notice that the flag is set, but not before, so they may wait some time if the connection is idle or slow. The multithreaded NNTP servers will close all open connections when they notice that the flag is set, but will remain active and keep the News index files open. These servers must be stopped manually in order to release the index files.

If an error occurs in enqueueing the lock, NewsShutDown exits immediately with the error status as its exit status; otherwise, it waits the number of minutes given by News_Locked_Wait_Minutes for the lock to be granted. If the lock request still has not been granted after that time, NewsShutDown exits with status SS$_CANCEL (decimal 2096).

Once it has obtained the exclusive lock, NewsShutDown spawns a subprocess, passing it the translation of News_Locked_Command as the command to be executed. When this command completes, the subprocess is deleted, and NewsShutDown exits with the final status of the subprocess as its exit status. If you want more than one command to be executed in the subprocess, place them in a DCL procedure, and invoke that procedure via News_Locked_Command. Remember that NewsShutDown is holding an exclusive lock on the News resource throughout the life of the subprocess, so you cannot execute News images via News_Locked_Command.
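
As a hypothetical sketch of that arrangement (the procedure name and contents are purely illustrative, and the Convert command assumes an optimized Groups.FDL has already been prepared; note that the procedure must not run any News images itself):

$ ! News_Locked_Tasks.Com -- commands to run while the News resource is locked
$ Set Default News_Root:
$ Convert/NoFast/FDL=Groups.FDL News.Groups News.Groups
$ Purge/NoLog News.Groups
$ Exit

The procedure is then named via the logical before running NewsShutDown, just as News_Locked_Command is defined in the example in the next section:

$ Define/User News_Locked_Command "@News_Manager:News_Locked_Tasks.Com"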

7.4.4.4 Examples

The following example uses NewsShutDown to force other News images to exit as soon as possible, and then keeps them from restarting using the News_Stop command. This is one way to perform an orderly shutdown of News activity on a node for an extended time.
$ oldprv = F$SetPrv("SysLck,SysNam")
$ Define/User News_Locked_Command "Define/System/Executive News_Stop 1"
$ Define/User News_Locked_Wait_Minutes 30
$ Run News_Manager:NewsShutDown.Exe
$ sts = $Status
$ If .not.sts Then -
    Write Sys$Error "%NEWSSTOP-F-ERROR, unable to shut down ANU News"
$ oldprv = F$SetPrv(oldprv)
$ Exit sts

For an example which uses NewsShutDown to obtain exclusive access to the News index files in order to optimize them, see the section above titled "Obtaining exclusive access to the News database".


If you run into any difficulties setting up or using ANU News, the best source for advice is the newsgroup news.software.anu-news, or the associated BITNET mailing list ANU_News@UBVM.BitNet. (There is a bidirectional gateway between the list and the newsgroup, so you need only report problems by one of these routes.) When reporting a problem to the newsgroup, please try to include the following information:

This may seem like a lot of information to gather, but in most cases, only some of these items apply, and accurate information will greatly assist others in figuring out what you're seeing.

When you post your problem report, please remember to include a valid return address (if at all possible, place it in a From: or Reply-To: header). This is, of course, particularly important if the problem has disrupted your access to news.

This appendix is based on notes compiled by Rand Hall <rand@merrimack.edu> and posted to the newsgroup news.software.anu-news in 1990. It explains how to use RMS global buffers on your News index files to improve efficiency of lookups by caching the index records in memory.

To do this you need to have exclusive access to the index files News.Groups and News.Items. (See above for explanation of locking the News database.) First, you'll need to analyze the structure of the index files. From the News_Root directory, say

$ Analyze/RMS/FDL/Output=Groups.FDL News.Groups
$ Edit/FDL Groups.FDL
  . . .
        Main Editor Function            (Keyword)[Help] : Invoke
  . . .
        Editing Script Title            (Keyword)[-]    : Optimize
 
        An Input Analysis File is necessary for Optimizing Keys.
 
        Analysis File file-spec (1-126 chars)[null]
        : Groups.FDL
  . . .
        Graph type to display           (Keyword)[Line] : Line

(Press <Return> to accept defaults.)
        Which File Parameter    (Mnemonic)[refresh]     : FD
        Text for FDL Title Section      (1-126 chars)[null]
        : 
<Return>
In the display, note the maximum values for Number of Buckets in Index, Suggested Bucket Size, and Pages Required to Cache Index. Press <Return> (or select FD at the graph) through all of the KEY 1 stuff (technical, eh?) until you get back to the main menu. Select Exit.

Buckets in Index should be Pages Required to Cache Index divided by Bucket Size. Edit Groups.FDL and add the line GLOBAL_BUFFER_COUNT n in the FILE section of the FDL file, where n is the value of Pages Required to Cache Index.
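
For instance, after the edit the FILE section of Groups.FDL might look something like the following sketch (the count shown is purely illustrative; use the Pages Required to Cache Index value reported for your own file):

FILE
        ORGANIZATION            indexed
        GLOBAL_BUFFER_COUNT     120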

If you want to be more productive, look at the compression stats in the original (before Edit/FDL was invoked) Groups.FDL, in the Analysis of Key sections. If a key or record is compressed (look in KEY description at top of file) and the stats are lousy (<50%) turn compression off by editing the new Groups.FDL.

The parameters in the FDL file are now optimized. To apply them to News.Groups, obtain exclusive access to the News database, and say
$ Convert/NoFast/FDL=Groups.FDL News.Groups News.Groups

Now (huff, pant) do the same for News.Items (this takes a looooong time).
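
In outline, that means repeating the steps above with the other index file; a sketch:

$ Analyze/RMS/FDL/Output=Items.FDL News.Items
$ Edit/FDL Items.FDL
$ Convert/NoFast/FDL=Items.FDL News.Items News.Items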

Each file you slap global buffers on needs one global section, so you may need to increase the SYSGEN parameter GBLSECTIONS. You may also need to boost the SYSGEN parameters GBLPAGES, GBLPAGFIL, and RMS_GBLBUFQUO. The following DCL procedure will show you what you need for each file:

$ !  RMSGLOB_SYSGEN.COM
$ !+
$ !  Author: Rand P. Hall  <rand@merrimack.edu>
$ !
$ !  Command procedure to estimate RMS global buffer SYSGEN
$ !  units needed for one indexed file.  This command procedure
$ !  can be adapted to accumulate these units across a number
$ !  of indexed files.
$ !
$ !  CAUTION:  This sample command procedure has been tested
$ !  using VMS 5.4-3.  However, we cannot guarantee its
$ !  effectiveness because of the possibility of error in
$ !  transmitting or implementing it.  It is meant to be used
$ !  as a template for writing your own command procedure, and
$ !  it may require modification for use on your system.
$ !
$ !-
$ ON WARNING THEN GOTO DONE
$ IF P1 .EQS. "" THEN $INQUIRE P1 "FILE SPEC "
$ P1 = F$SEARCH (P1, 1)                  ! Returns full filespec
$ gbc = F$FILE_ATTRIBUTES (P1,"GBC")     ! GBC available as of
$ IF gbc .EQ. 0 THEN GOTO NOGBC          ! VMS V5.2
$ bks = F$FILE_ATTRIBUTES (P1,"BKS")
$ RMS_GBLBUFQUO = gbc
$ GBLSECTIONS   = 1
$ k             = 64 + (gbc * 48)        ! GBH and GBD descriptor bytes
$ GBLPAGFIL     = ((gbc * bks * 512) + k + 1023)/512
$ GBLPAGES      = GBLPAGFIL + 2          ! plus 2 stopper pages
$ WRITE SYS$OUTPUT ""
$ WRITE SYS$OUTPUT "SYSGEN values needed by ",P1
$ WRITE SYS$OUTPUT ""
$ WRITE SYS$OUTPUT "  RMS_GBLBUFQUO = ",RMS_GBLBUFQUO
$ WRITE SYS$OUTPUT "  GBLSECTIONS   = ",GBLSECTIONS
$ WRITE SYS$OUTPUT "  GBLPAGES      = ",GBLPAGES
$ WRITE SYS$OUTPUT "  GBLPAGFIL     = ",GBLPAGFIL
$ GOTO DONE
$ NOGBC:
$   WRITE SYS$OUTPUT ""
$   WRITE SYS$OUTPUT "Following file did not have global buffers set: "
$   WRITE SYS$OUTPUT "     ",P1
$ DONE:
$   EXIT
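
A typical pair of invocations (assuming you saved the procedure as RMSGLOB_SYSGEN.COM in News_Manager, and that your index files live in the News_Root directory) might be:

$ @News_Manager:RMSGLOB_SYSGEN News_Root:News.Groups
$ @News_Manager:RMSGLOB_SYSGEN News_Root:News.Items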
