============================================================
File 0XECSUM.TXT: "EXECUTIVE SUMMARY"
Heureka AntiVirus/AntiMalware Product Test "2001-07"
antiVirus Test Center (VTC), University of Hamburg
============================================================
[Formatted with non-proportional font (Courier), 72 columns]

***************************************************************
Content of this file:
***************************************************************
1. Foreword of the Editors
2. VTC Testbeds used in VTC Heureka test "2001-07"
3. Products participating in Heureka Test "2001-07"
     Table ES2: List of AV products in VTC Heureka test
4. Results of on-demand persistent detection under Windows-NT
   4.1 Development of zoo macro virus detection rates
         Results "Heureka.A"
   4.2 Development of ITW macro virus detection rates
         Results "Heureka.B"
   4.3 Development of zoo script virus detection rates
         Results "Heureka.C"
   4.4 Development of macro malware detection rates
         Results "Heureka.D"
   4.5 Grading WNT products according to "Heureka" results

     ************************************************************
     "Perfect" WNT persistent AV product:    =NONE=
     ************************************************************
     "Excellent" WNT AV products:
         1st place:  SCN                 ( 6 points)
         2nd place:  FSE                 ( 4 points)
         3rd place:  CMD, FPR, FPW, RAV  ( 3 points)
         7th place:  AVP, INO, PAV       ( 1 point )
     **************************************************************
     No "perfect", "excellent" or "very good" persistent AM product
     **************************************************************

5. Availability of full test results
6. Copyright, License, and Disclaimer

*****************************************************************

1. Foreword of the Editors:
===========================

When VTC test "2001-04" was discussed, the question was raised
whether the related products would also be able to detect viruses
found significantly after their engine and signature date.

Methodologically, AntiVirus products are able to detect "new"
viruses when these resemble, to a certain degree, an already known
virus family: in this case, variants of a given family are usually
detected with "generic" methods. An AV product may also analyse,
usually in a specific - often called "heuristic" - mode, whether
there are symptoms which may relate to some viral mechanism.

In the first case, AV products detect such viruses in their
"normal" modes, esp. including those which give "optimum
detection" (as usually addressed in VTC tests). But in the latter
case, a special switch must be set to enable the special
heuristic mode.

In order to start from a known basis of detection, we decided to
use the same switches as in VTC test "2001-04". Consequently,
though this test addresses the detection of viruses hitherto
unknown to the AV products, it is NOT a "heuristic test". We
therefore named it a "Heureka" test (from Greek: "I have found").

As the next VTC test ("2001-09") was still under preparation, we
also decided:
 - this Heureka test addresses only products under Windows-NT;
 - the test is only done for macro and script viruses.

Two sets of testbeds were prepared:
 - as the reference testbed was frozen on October 31, 2000,
   testbed ".011" contained all zoo macro and script viruses
   which were first reported between November 1, 2000 and
   January 31, 2001;
 - testbed ".014" contained all zoo macro and script viruses
   first reported between February 1 and April 30, 2001;
 - for ITW macro viruses, those viruses were selected from the
   related Wildlists (January and April) which were then first
   reported to be In-The-Wild. Evidently, this included viruses
   which were probably already in the zoo used in test "2001-04".

After establishment of the testbeds, the test crew worked very
reliably and produced results within 2 weeks. Most work in VTC
tests rests on the shoulders of our test crew, and the editors
wish to thank them all for their devotion and hard work.

2. VTC Testbeds used in VTC test "2001-07":
===========================================

The sizes of the different VTC testbeds are given in the following
table (for detailed indices of VTC testbeds, see file
"a3testbed.txt").

Table ES1: Content of VTC test Heureka databases:
============================================================================
"Full Zoo.011": 267 newly reported macro viruses  in 1219 infected objects
                 85 newly reported script viruses in  152 infected objects
                 23 newly reported macro malware  in   49 different objects
"ITW Zoo.011":   22 newly reported macro viruses  in   67 infected objects
----------------------------------------------------------------------------
"Full Zoo.014": 278 newly reported macro viruses  in 1016 infected objects
                121 newly reported script viruses in  234 infected objects
                 41 newly reported macro malware  in   53 different objects
"ITW Zoo.014":   11 newly reported macro viruses  in   37 infected objects
============================================================================

3. Products participating in Heureka Test "2001-07":
====================================================

As general reference, compare the results (W-NT section) of VTC
test "2001-04". For test "2001-07", the following *** 15 ***
AntiVirus products (addressed in subsequent tables by a 3-letter
abbreviation) under Windows-NT were tested:

Table ES2: List of AV products in Heureka test
==============================================
----------------------------------------------------------
Products submitted for aVTC Heureka test under Windows-NT:
----------------------------------------------------------
  AV3           v: 3.0.304.0           sig: Dec.04,2000
  AVG 6         v: 6.220               sig: Dec.11,2000
  AVK 10        v: 10,0,0,0            sig: Dec.07,2000
  AVP Platinum  v: 3.5.311.0           sig: Dec.07,2000
  CMD           v: 4.60                sig: Dec.11,2000
  FPR           v: 3.08b               sig: Dec.11,2000
  FPW           v: 3.08b               sig: Dec.11,2000
  FSE           v: 5.21                sig: Dec.01,2000
  INO           v: 4.53 Enterprise Ed. sig: Dec.11,2000
  NVC           v: 4.86                sig: Dec.01,2000
  PAV           v: 3.0.132.4           sig: Dec.07,2000
  PER           v: 6.60                sig: Nov.30,2000
  RAV           v: 8.1.001             sig: Dec.11,2000
  SCN           v: 4.12.0              sig: Dec.06,2000
  VSP           v: 12.02.2             sig: Dec.11,2000
--------------------------------------------------------

Remark: NAV and Pro, which participated in VTC test "2001-04",
were withdrawn from this test, as new versions were announced.

For details of the AV products, including the options used to
determine optimum detection rates, see A3SCNLS.TXT. Detailed
results, including precision and reliability of virus and malware
identification, are presented in 6gwnt.txt, and an analysis
(evaluation) of results is presented in 7evalwnt.txt.

4. Results of on-demand persistent detection under Windows-NT:
==============================================================

In the following section, the results are analysed in some
detail. Many more details, including detection of infected
objects as well as precision and reliability of detection, can
be found in 6gwnt.txt.
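For readers who want to recompute the tables in this section:
each percentage is simply the number of detected samples divided
by the testbed size, and each "Mean" row is the arithmetic mean
of the per-product percentages (this matches the published
figures). A minimal sketch in Python follows; it is illustrative
only, and the function names are ours, not part of the VTC test
tools:

    def detection_rate(detected, testbed_size):
        """Percentage of testbed samples detected, to 0.1%."""
        return round(100.0 * detected / testbed_size, 1)

    def mean_rate(rates):
        """Arithmetic mean of per-product percentages,
        i.e. the 'Mean' row of the tables below."""
        return round(sum(rates) / len(rates), 1)

    # Example from section 4.1: SCN detects 261 of the 267 new
    # zoo macro viruses of testbed ".011":
    assert detection_rate(261, 267) == 97.8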
4.1 Development of zoo macro virus detection rates:
===================================================

The following table summarizes zoo macro virus detection results.

             Total known    I      New viruses detected:
Testbed       "2000-0A"     I    "2001-01"      "2001-04"
----------------------------+------------------------------
              Viruses       I   New viruses   New viruses
Scanner       detected      I    detected      detected
----------------------------+------------------------------
Testbed     6233  100.0     I    267  100.0    278  100.0
----------------------------+------------------------------
AV3         5966   95.7     I    139   52.1    156   56.1
AVG         6128   98.3     I    226   84.6    216   77.7
AVK         6230  100.0     I    195   73.0    147   52.9
AVP         6230  100.0     I    195   73.0    147   52.9
CMD         6233  100.0     I    225   84.3    214   77.0
FPR         6233  100.0     I    225   84.3    214   77.0
FPW         6233  100.0     I    225   84.3    214   77.0
FSE         6233  100.0     I    246   92.1    247   88.8
INO         6215   99.7     I    250   93.6    256   92.1
NVC         6219   99.8     I    146   54.7     78   28.1
PAV         6230  100.0     I    196   73.4    151   54.3
PER         4252   68.2     I    199   74.5    221   79.5
RAV         6208   99.6     I    223   83.5    223   80.2
SCN         6233  100.0     I    261   97.8    270   97.1
VSP            1    0.0     I      0    0.0      0    0.0
----------------------------+------------------------------
Mean               90.8%    I          73.7%         66.0%
----------------------------+------------------------------

Analysis: the ability of AV products to detect macro viruses
which were first reported after the related engine and signatures
were shipped evidently differs widely:

 - with one exception (PER), AV products continuously lose
   detection quality over time;
 - starting from the mean detection rate in the reference test
   (90.8%), the loss of detection quality is
     * in the first 3 months: -17.1%
     * in the next 3 months:  -24.8%

In order to characterize this quality (starting with the
detection rate in reference test "2001-04"), we define the
"loss vector", where each loss is taken relative to the
reference test (a small computational sketch is given at the
end of this subsection):

   loss vector = (detection rate in reference test,
                  loss in months 1-3,
                  loss in months 4-6)

In order to classify product behaviour, we grade products
according to their loss in detection quality. When considering
only products whose loss in the first 3-month period stays
within 20%, the following products behaved best in the "Heureka"
test:

-------------------------------------------------
            detection rate   loss in     loss in
AV product   in ref-test    month 1-3   month 4-6
-------------------------------------------------
SCN            100.0%        - 2.2%      - 2.9%
INO             99.7%        - 6.1%      - 7.6%
FSE            100.0%        - 7.9%      -11.2%
AVG             98.3%        -13.7%      -20.6%
CMD            100.0%        -15.7%      -23.0%
FPR            100.0%        -15.7%      -23.0%
FPW            100.0%        -15.7%      -23.0%
-------------------------------------------------

*************************************************************
Result "Heureka.A": concerning new zoo macro viruses, the
following 2 products lose less than 10% over each 3-month
period:
  -----------------------------------------------------
  SCN       1st period: - 2.2%     2nd period: - 2.9%
  -----------------------------------------------------
  INO       1st period: - 6.1%     2nd period: - 7.6%
  -----------------------------------------------------
FSE stays below this threshold only in the first period:
  FSE       1st period: - 7.9%     2nd period: -11.2%
**************************************************************
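The loss vector can be computed directly from the tabulated
detection rates. A minimal sketch in Python (illustrative only;
the function name is ours, not part of the VTC test tools),
assuming - as in the tables above - that each loss is taken
relative to the reference-test rate:

    def loss_vector(ref_rate, period_rates):
        """Return (reference rate, loss in months 1-3, loss in
        months 4-6); each loss is relative to the reference."""
        return (ref_rate,
                *(round(ref_rate - r, 1) for r in period_rates))

    # Rates taken from the zoo macro virus table above:
    print(loss_vector(100.0, [97.8, 97.1]))  # SCN: (100.0, 2.2, 2.9)
    print(loss_vector( 99.7, [93.6, 92.1]))  # INO: (99.7, 6.1, 7.6)

The same computation, with different selection thresholds,
underlies the tables in sections 4.2 to 4.4.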
4.2 Development of ITW macro virus detection rates:
===================================================

The following table summarizes In-The-Wild macro virus detection
results.

             Total known    I      New viruses detected:
Testbed       "2000-0A"     I    "2001-01"      "2001-04"
----------------------------+------------------------------
              Viruses       I   New viruses   New viruses
Scanner       detected      I    detected      detected
----------------------------+------------------------------
Testbed      147  100.0     I     22  100.0     11  100.0
----------------------------+------------------------------
AV3          147  100.0     I     21   95.5      7   63.6
AVG          147  100.0     I     22  100.0      9   81.8
AVK          147  100.0     I     22  100.0      9   81.8
AVP          147  100.0     I     22  100.0      9   81.8
CMD          147  100.0     I     22  100.0     11  100.0
FPR          147  100.0     I     22  100.0     11  100.0
FPW          147  100.0     I     22  100.0     11  100.0
FSE          147  100.0     I     22  100.0     11  100.0
INO          147  100.0     I     22  100.0     10   90.9
NVC          147  100.0     I     22  100.0      8   72.7
PAV          147  100.0     I     22  100.0      9   81.8
PER          114   77.6     I     11   50.0     10   90.9
RAV          147  100.0     I     22  100.0     11  100.0
SCN          147  100.0     I     22  100.0     11  100.0
VSP            0    0.0     I      0    0.0      0    0.0
----------------------------+------------------------------
Mean               91.8%    I          89.7%         83.0%
----------------------------+------------------------------

Concerning In-The-Wild macro viruses, the situation is
- understandably - much better than with zoo viruses, as most
ITW viruses are essentially new variants of virus families found
some time (if not long) ago. Consequently, generic detection of
new viruses in such - often fast-growing - strains is available
in most scanners. Indeed, several products detect ALL new
variants in both periods.

Applying the same grading criteria as for zoo viruses, the
following products behaved best in the "Heureka" test:

-------------------------------------------------
            detection rate   loss in     loss in
AV product   in ref-test    month 1-3   month 4-6
-------------------------------------------------
CMD            100.0%        - 0.0%      - 0.0%
FPR            100.0%        - 0.0%      - 0.0%
FPW            100.0%        - 0.0%      - 0.0%
FSE            100.0%        - 0.0%      - 0.0%
RAV            100.0%        - 0.0%      - 0.0%
SCN            100.0%        - 0.0%      - 0.0%
-------------------------------------------------
INO            100.0%        - 0.0%      - 9.1%
AVG            100.0%        - 0.0%      -18.2%
AVK            100.0%        - 0.0%      -18.2%
AVP            100.0%        - 0.0%      -18.2%
PAV            100.0%        - 0.0%      -18.2%
NVC            100.0%        - 0.0%      -27.3%
-------------------------------------------------

*****************************************************************
Result "Heureka.B": The following *6* products detect
In-The-Wild macro viruses with the highest persistency over
both 3-month periods:
              CMD, FPR, FPW, FSE, RAV, SCN
*****************************************************************

4.3 Development of zoo script virus detection rates:
====================================================

The following table summarizes zoo script virus detection
results.
             Total known    I      New viruses detected:
Testbed       "2000-0A"     I    "2001-01"      "2001-04"
----------------------------+------------------------------
              Viruses       I   New viruses   New viruses
Scanner       detected      I    detected      detected
----------------------------+------------------------------
Testbed      477  100.0     I     85  100.0    121  100.0
----------------------------+------------------------------
AV3          139   29.1     I     21   24.7     14   11.6
AVG          276   57.9     I     44   51.8     38   31.4
AVK          476   99.8     I     70   82.4     91   75.2
AVP          476   99.8     I     77   90.6     95   78.5
CMD          462   96.9     I     32   37.6     33   27.3
FPR          463   97.1     I     32   37.6     33   27.3
FPW          462   96.9     I     32   37.6     33   27.3
FSE          477  100.0     I     74   87.1     95   78.5
INO          442   92.7     I     58   68.2     62   51.2
NVC          422   88.5     I     51   60.0     57   47.1
PAV          476   99.8     I     78   91.8     95   78.5
PER          105   22.0     I     18   21.2     12    9.9
RAV          405   84.9     I     58   68.2     75   62.0
SCN          477  100.0     I     78   91.8     88   72.7
VSP          407   85.3     I     55   64.7     73   60.3
----------------------------+------------------------------
Mean               83.4%    I          61.0%         49.3%
----------------------------+------------------------------

In comparison with zoo macro viruses, the situation concerning
script (VBS, JavaScript etc.) viruses is much less favourable,
as the following table demonstrates (again, only AV products
whose loss in the first 3-month period stays below 20% are
listed, now with a minimum detection rate of 90% in the
reference test):

-------------------------------------------------
            detection rate   loss in     loss in
AV product   in ref-test    month 1-3   month 4-6
-------------------------------------------------
PAV             99.8%        - 8.0%      -21.3%
SCN            100.0%        - 8.2%      -27.3%
AVP             99.8%        - 9.2%      -21.3%
FSE            100.0%        -12.9%      -21.5%
AVK             99.8%        -17.4%      -24.6%
-------------------------------------------------

*****************************************************************
Result "Heureka.C": Zoo script virus detection is significantly
less well developed. With even the best products losing more
than 8% per 3-month period, there is a strong need for improved
persistent detection methods, esp. as this category covers many
mass-mailing viruses!
*****************************************************************

4.4 Development of macro malware detection rates:
=================================================

The following table summarizes macro malware detection results.

             Total known    I      New malware detected:
Testbed       "2000-0A"     I    "2001-01"      "2001-04"
----------------------------+------------------------------
              Malware       I   New malware   New malware
Scanner       detected      I    detected      detected
----------------------------+------------------------------
Testbed      403  100.0     I     23  100.0     41  100.0
----------------------------+------------------------------
AV3          329   81.6     I      6   26.1     13   31.7
AVG          323   80.1     I     13   56.5     23   56.1
AVK          400   99.3     I     16   69.6     25   61.0
AVP          400   99.3     I     16   69.6     25   61.0
CMD          402   99.8     I     16   69.6     27   65.9
FPR          402   99.8     I     16   69.6     27   65.9
FPW          402   99.8     I     16   69.6     27   65.9
FSE          403  100.0     I     19   82.6     28   68.3
INO          378   93.8     I     18   78.3     29   70.7
NVC          399   99.0     I     10   43.5     21   51.2
PAV          400   99.3     I     16   69.6     25   61.0
PER          234   58.1     I     15   65.2     25   61.0
RAV          391   97.0     I     18   78.3     24   58.5
SCN          403  100.0     I     18   78.3     28   68.3
VSP            1    0.2     I      0    0.0      0    0.0
----------------------------+------------------------------
Mean               87.1%    I          61.8%         56.4%
----------------------------+------------------------------

As non-replicative malware (esp. trojan horses) has recently
been reported in large numbers from Internet newsgroups and on
websites, it is essential to detect at least new instantiations
of known malware.
The following table lists those products which reach detection
rates of at least 80% in the reference test and do not lose more
than 30% in detection rate during the first 3-month period:

-------------------------------------------------
            detection rate   loss in     loss in
AV product   in ref-test    month 1-3   month 4-6
-------------------------------------------------
FSE            100.0%        -17.4%      -31.7%
SCN            100.0%        -21.7%      -31.7%
AVK             99.3%        -29.7%      -38.3%
AVP             99.3%        -29.7%      -38.3%
PAV             99.3%        -29.7%      -38.3%
RAV             97.0%        -18.7%      -38.5%
INO             93.8%        -15.5%      -23.1%
-------------------------------------------------

******************************************************************
Result "Heureka.D": The persistency of non-replicative malware
detection needs further improvement. Even those 5 AV products
which detect macro malware "perfectly" (100.0%) or "excellently"
(>= 99.0%) lose detection quality already in the first 3-month
period.
******************************************************************

4.5 Grading WNT products according to "Heureka" results:
========================================================

In order to grade the "persistent quality" of AV products, i.e.
their ability to still detect new viruses in consecutive 3-month
periods, we define the following grid (applied to the first
3-month period):

Category I:   "perfect persistency":
                 loss per 3-month period  =  0.0%
Category II:  "excellent persistency":
                 0.0% <  loss per 3-month period <=  5.0%
Category III: "very good persistency":
                 5.0% <  loss per 3-month period <= 10.0%
Category IV:  "good persistency":
                10.0% <  loss per 3-month period <= 15.0%
Category V:   "acceptable persistency":
                15.0% <  loss per 3-month period <= 20.0%

Except for In-The-Wild macro viruses, NO product reaches
"perfectly persistent" detection rates:

*******************************************************
Overall, NO AV product detects macro and script malware
(including viruses) with perfect persistency.
*******************************************************

The following table lists those AV products which fall into
categories I (perfect), II (excellent) and III (very good):

Test category:          Perfect          Excellent   Very Good
---------------------------------------------------------------
WNT zoo macro virus:      ---              SCN       INO, FSE
WNT ITW macro virus:    CMD, FPR, FPW,     ---         ---
                        FSE, RAV, SCN
WNT zoo script virus:     ---              ---       PAV, SCN, AVP
---------------------------------------------------------------
WNT macro malware:        ---              ---         ---
---------------------------------------------------------------

In order to support the race for more customer protection, we
rank the products with a simple algorithm: in each test category,
"perfect" is weighed with 3 points, "excellent" with 2 points and
"very good" with 1 point, and the points are summed per product
(an illustrative sketch of this point count is given at the end
of this subsection):

************************************************************
"Perfect" WNT persistent AV product:    =NONE=
************************************************************
"Excellent" WNT AV products:
    1st place:  SCN                 ( 6 points)
    2nd place:  FSE                 ( 4 points)
    3rd place:  CMD, FPR, FPW, RAV  ( 3 points)
    7th place:  AVP, INO, PAV       ( 1 point )
**************************************************************
No "perfect", "excellent" or "very good" persistent AM product
**************************************************************
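The point count above can be reproduced mechanically. A minimal
sketch in Python (illustrative only; the data structure and names
are ours, not part of the VTC test tools), using the category
table above:

    POINTS = {"perfect": 3, "excellent": 2, "very good": 1}

    # Placements per test category, from the table above:
    placements = {
        "zoo macro":     {"excellent": ["SCN"],
                          "very good": ["INO", "FSE"]},
        "ITW macro":     {"perfect":   ["CMD", "FPR", "FPW",
                                        "FSE", "RAV", "SCN"]},
        "zoo script":    {"very good": ["PAV", "SCN", "AVP"]},
        "macro malware": {},
    }

    scores = {}
    for category in placements.values():
        for grade, products in category.items():
            for product in products:
                scores[product] = scores.get(product, 0) \
                                  + POINTS[grade]

    # Descending point order reproduces the ranking above:
    # SCN 6, FSE 4, CMD/FPR/FPW/RAV 3, AVP/INO/PAV 1
    for product, pts in sorted(scores.items(), key=lambda s: -s[1]):
        print(product, pts)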
5. Availability of full test results:
=====================================

Much more information about this test, its methods and viral
databases, as well as detailed test results, is available for
anonymous FTP download from VTC's homepage (VTC is part of
Working Group AGN):

    ftp://agn-www.informatik.uni-hamburg.de/vtc

Any comment or critical remark which helps VTC to improve its
test methods will be warmly welcomed.

The next comparative test will evaluate macro (VBA/VBA5) and
script virus detection; it is planned for June to September 2001,
with viral databases frozen on April 30, 2001. Any AV producer
wishing to participate in the forthcoming test (and conforming
with the VTC test rules and the VTC Code of Conduct) is invited
to submit related products.

On behalf of the VTC Test Crew:
Dr. Klaus Brunnstein & B.Sc. Jan Seedorf
(Hamburg: July 17, 2001)

6. Copyright, License, and Disclaimer:
======================================

This publication is (C) Copyright 2001 by Klaus Brunnstein and
the Virus Test Center (VTC) at the University of Hamburg,
Germany.

Permission (Copy-Left) is granted to everybody to distribute
copies of this information in electronic form, provided that
this is done for free, that the contents of the information are
not changed in any way, and that the origin of this information
is explicitly mentioned. It is esp. permitted to store and
distribute this set of text files at university or other public
mirror sites where security/safety-related information is stored
for unrestricted public access for free.

Any other use, esp. including distribution of these text files
on CD-ROMs or any publication as a whole or in parts, is ONLY
permitted after contact with the supervisor, Prof. Dr. Klaus
Brunnstein, or authorized members of the Virus Test Center at
Hamburg University, and such an agreement must be made in
explicit writing, prior to any publication.

No responsibility is assumed by the author(s) for any injury
and/or damage to persons or property, whether as a matter of
products liability, negligence or otherwise, or from any use or
operation of any methods, products, instructions or ideas
contained in the material herein.

Prof. Dr. Klaus Brunnstein
University of Hamburg, Germany
(July 17, 2001)