SWEC is a program that automates testing of dynamic Web sites. It parses each HTML page it finds for links and, if those links fall within the specified site, checks those pages as well. While parsing pages and locating links, it also scans each page for known errors and reports them. It likewise reports any page that cannot be read (for example, when the server returns a 404, 500, or similar error).
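The core of such a crawl is extracting links from a page and keeping only those within the target site. The following sketch uses Python's standard library to illustrate the idea; the class and function names are illustrative, not part of SWEC (which is written in Perl):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

def same_site(url, site_root):
    """True if url lives on the same host as site_root."""
    return urlparse(url).netloc == urlparse(site_root).netloc

# Example: only the first link belongs to the site being checked.
html = '<a href="/about.html">About</a> <a href="http://other.example/x">Ext</a>'
parser = LinkExtractor("http://www.example.com/index.html")
parser.feed(html)
internal = [u for u in parser.links if same_site(u, "http://www.example.com/")]
```

A real checker would then fetch each URL in `internal`, record any HTTP errors, and repeat the extraction on the pages it retrieves.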
|Tags||Internet, Web, Dynamic Content, Site Management, Link Checking, Software Development, Testing, Text Processing, Markup, HTML/XHTML|
|Operating Systems||POSIX, BSD, Linux|
Release Notes: A new version of the test definition format (SDFv2) is now used, which improves both the speed and the flexibility of error checks. The old format is deprecated but will continue to be supported until SWEC 0.6. Seed/baseurl parsing was made smarter. The --checksub parameter was added, which makes SWEC descend into subdomains. Various bugfixes, code cleanups, and minor changes were made.
Release Notes: SWEC now returns nonzero if a test fails. The dependency on HTML::LinkExtractor was removed (it is now optional). The user-agent string and the final summary were cleaned up. The --nohead option was added, which tells SWEC to skip performing HEAD requests and go straight to GET. The --keepgoing option was added, which tells SWEC to parse a document for URLs even if it contains errors. Various other bugfixes and minor enhancements were made.
Release Notes: Minor fixes and enhancements were made to various tests. SWEC now retries if the server resets the connection before a test completes. Some of the output is easier to read. This release adds --lwphead and --lwpget, which are equivalent to the LWP HEAD and GET commands but with added support for cookies.
Release Notes: Many new tests were added. Internal error codes are generated for HTTP errors so that you can exclude those tests. Better assumptions are made when you supply only a single URL on the commandline. Binary files are now skipped based upon their returned HTTP content type rather than just their extension. A HEAD request is now performed before the GET request, so that skipped files are not downloaded needlessly. URL seeds are now checked in the order they appear on the commandline.
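The HEAD-before-GET optimization works because a HEAD response carries the same Content-Type header as a GET would, without the body. A minimal sketch of the decision logic follows; the transport is passed in as plain functions (here stubs) so the flow is visible without real HTTP traffic, and all names are illustrative rather than taken from SWEC:

```python
def is_checkable(content_type):
    """Only textual documents are worth downloading and parsing."""
    media_type = content_type.split(";")[0].strip().lower()
    return media_type in ("text/html", "application/xhtml+xml", "text/plain")

def fetch_if_checkable(url, head, get):
    """Issue a cheap HEAD first; only GET the body when the
    returned Content-Type says the document can be parsed."""
    content_type = head(url)
    if not is_checkable(content_type):
        return None  # binary file: skip it without downloading the body
    return get(url)

# Stub transport functions standing in for real HEAD/GET requests.
def fake_head(url):
    return "image/png" if url.endswith(".png") else "text/html; charset=utf-8"

def fake_get(url):
    return "<html>body of %s</html>" % url

fetch_if_checkable("http://www.example.com/logo.png", fake_head, fake_get)   # skipped
fetch_if_checkable("http://www.example.com/index.html", fake_head, fake_get) # fetched
```

The --nohead option mentioned in a later release makes sense in this light: when a server handles HEAD badly, skipping straight to GET trades some wasted bandwidth for reliability.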