目录

  • 1 Chapter 1 Introduction
    • 1.1 1.0 Course Contents
    • 1.2 1.1 Basic Conception
      • 1.2.1 Lecture 1
      • 1.2.2 Lecture 2
    • 1.3 1.2 Compiler Structure
      • 1.3.1 Lecture 1
      • 1.3.2 Lecture 2
      • 1.3.3 Lecture 3
    • 1.4 1.3 The Technique of Compiler Developing
  • 2 Chapter 2 Conspectus of Formal Language
    • 2.1 2.1 Alphabets and Strings
      • 2.1.1 Lecture 1
      • 2.1.2 Lecture 2
    • 2.2 2.2 Grammars and its Categories
      • 2.2.1 Lecture 1
      • 2.2.2 Lecture 2
    • 2.3 2.3 Languages and Parse Tree
      • 2.3.1 Lecture 1
      • 2.3.2 Lecture 2
    • 2.4 2.4 Notes of Formal Language
    • 2.5 2.5 Basic Parsing Techniques
      • 2.5.1 Lecture 1
      • 2.5.2 Lecture 2
  • 3 Chapter 3 Finite Automata
    • 3.1 3.1 Formal Definition of FA
      • 3.1.1 Lecture 1
      • 3.1.2 Lecture 2
    • 3.2 3.2 Transition from NDFA to DFA
      • 3.2.1 Lecture 1
      • 3.2.2 Lecture 2
      • 3.2.3 Lecture 3
    • 3.3 3.3 RG and FA
    • 3.4 3.4 Regular Expression & Regular Set
      • 3.4.1 Lecture 1
      • 3.4.2 Lecture 2
  • 4 Chapter 4 Scanner (Lexical Analyzer)
    • 4.1 4.1 Lexical Analyzer and Tokens
      • 4.1.1 Lecture 1
      • 4.1.2 Lecture 2
    • 4.2 4.2 Steps for developing a lexical analyzer
    • 4.3 4.3 Dealing with Identifiers
    • 4.4 4.4 Using Regular Expressions
    • 4.5 4.5 Using Flex
      • 4.5.1 Lecture 1
      • 4.5.2 Lecture 2
  • 5 Chapter 5 Top-Down Parsing
    • 5.1 5.0 Push Down Automata (PDA, Added)
      • 5.1.1 Lecture 1
      • 5.1.2 Lecture 2
      • 5.1.3 Lecture 3
    • 5.2 5.1 Eliminating Left-Recursion
    • 5.3 5.2 LL(k) Grammar
      • 5.3.1 Lecture 1
      • 5.3.2 Lecture 2
    • 5.4 5.3 Deterministic LL(1) Analyzer Construction
    • 5.5 5.4 Recursive-descent (Non-backtracking) parsing
    • 5.6 5.5 Review and Course Wrap-up
      • 5.6.1 Lecture 1 Closing Remarks
      • 5.6.2 Lecture 2 About the Review
      • 5.6.3 Lecture 3 Exercise Walkthrough
  • 6 Chapter 6 Bottom-Up Parsing and Precedence Analyzer
    • 6.1 6.1 Bottom-Up Parsing
    • 6.2 6.2 Phrase, Simple Phrase and Handle
    • 6.3 6.3 A Shift-Reduce Parser
    • 6.4 6.4 Some Relations on Grammar
    • 6.5 6.5 Simple Precedence Parsing
    • 6.6 6.6 Operator-Precedence Parsing
      • 6.6.1 Lecture 1
      • 6.6.2 Lecture 2
      • 6.6.3 Lecture 3
    • 6.7 6.7 Precedence Functions and Construction
  • 7 Chapter 7  LR Parsing
    • 7.1 7.1 LR Parsers
      • 7.1.1 Lecture 1
      • 7.1.2 Lecture 2
    • 7.2 7.2 Building a LR(0) parse table
      • 7.2.1 Lecture 1
      • 7.2.2 Lecture 2
    • 7.3 7.3 SLR Parse Table Construction
    • 7.4 7.4 Constructing Canonical LR(1) Parsing Tables
    • 7.5 7.5 LALR Parsing Tables Construction
    • 7.6 7.6 Using Ambiguous Grammars
    • 7.7 7.7 Yacc/Bison Overview
  • 8 Chapter 8 Syntax-Directed Translation
    • 8.1 8.1 Syntax-Directed Translation
      • 8.1.1 Lecture 1
      • 8.1.2 Lecture 2
      • 8.1.3 Lecture 3
    • 8.2 8.2 Abstract Syntax Tree
    • 8.3 8.3 Intermediate Representation
      • 8.3.1 Lecture 1
      • 8.3.2 Lecture 2
  • 9 Chapter 9 Run-Time Environment
    • 9.1 9.1 Data Area & Attribute
    • 9.2 Section 9.2~9.4 & Section 9.8~9.9
    • 9.3 9.5 Parameter Passing
    • 9.4 9.6 Stack Allocation
    • 9.5 9.7 Heap allocation
  • 10 Chapter 10 Symbol Tables
    • 10.1 10.1 A symbol Table Class
    • 10.2 10.2 Basic Implementation Techniques
    • 10.3 10.3 Block-structured Symbol Table
    • 10.4 10.4 Implicit Declaration
    • 10.5 10.5 Overloading
  • 11 Chapter 11 Code Optimization
    • 11.1 11.1 Control Flow Graph
    • 11.2 11.2 Redundancies
    • 11.3 11.3 Loop Optimizations
    • 11.4 11.4 Instruction Dispatch
      • 11.4.1 Lecture 1
      • 11.4.2 Lecture 2
  • 12 Chapter 12 Code Generation
    • 12.1 12.1 Code generation issues
    • 12.2 12.2 Simple Stack Machine
    • 12.3 12.3 Register Machine
    • 12.4 12.4 A Simple Code Generator
  • 13 13 Extended Reading 1: Stanford University Open Course
    • 13.1 Lecture 1
    • 13.2 Lecture 2
    • 13.3 Lecture 3
    • 13.4 Lecture 4
    • 13.5 Lecture 5
    • 13.6 Lecture 6
    • 13.7 Lecture 7
    • 13.8 Lecture 8
    • 13.9 Lecture 9
    • 13.10 Lecture 10
    • 13.11 Lecture 11
    • 13.12 More sources
  • 14 14 Extended Reading 2: illinois.edu lectures
    • 14.1 Lecture 1 Overview
    • 14.2 Lecture 2 Strings, Languages, DFAs
    • 14.3 Lecture 3 More on DFAs
    • 14.4 Lecture 4 Regular Expressions and Product Construction
    • 14.5 Lecture 5 Nondeterministic Automata
    • 14.6 Lecture 6 Closure properties
    • 14.7 Lecture 7 NFAs are equivalent to DFAs
    • 14.8 Lecture 8 From DFAs/NFAs to Regular Expressions
    • 14.9 Lecture 9 Proving non-regularity
    • 14.10 Lecture 10 DFA minimization
    • 14.11 Lecture 11 Context-free grammars
    • 14.12 Lecture 12 Cleaning up CFGs and Chomsky Normal form
    • 14.13 Lecture 13 Even More on Context-Free Grammars
    • 14.14 Lecture 14 Repetition in context free languages
    • 14.15 Lecture 15 CYK Parsing Algorithm
    • 14.16 Lecture 16 Recursive automatas
    • 14.17 Lecture 17 Computability and Turing Machines
    • 14.18 Lecture 18 More on Turing Machines
    • 14.19 Lecture 19 Encoding problems and decidability
    • 14.20 Lecture 20 More decidable problems, and simulating TM and “real” computers
    • 14.21 Lecture 21 Undecidability, halting and diagonalization
    • 14.22 Lecture 22 Reductions
    • 14.23 Lecture 23 Rice Theorem and Turing machine behavior properties
    • 14.24 Lecture 24 Dovetailing and non-deterministic Turing machines
    • 14.25 Lecture 25 Linear Bounded Automata and Undecidability for CFGs
  • 15 15 Extended Reading 3: Extended Reference Books
    • 15.1 15.1 English Text Book
    • 15.2 15.2 编译原理(何炎祥,伍春香,王汉飞 2010.04)
    • 15.3 15.3 编译原理(陈光建主编;贾金玲,黎远松,罗玉梅,万新副主编 2013.10)
    • 15.4 15.4 编译原理((美)Alfred V. Aho等著;李建中,姜守旭译 2003.08)
    • 15.5 15.5 编译原理学习与实践指导(金登男主编 2013.11)
    • 15.6 15.6 编译原理及编译程序构造 第2版(薛联凤,秦振松编著 2013.02)
    • 15.7 15.7 编译原理学习指导(莫礼平编 2012.01)
    • 15.8 15.8 JavaScript动态网页开发案例教程
  • 16 16 Chinese-Language Slides (PDF) as a Study Aid
    • 16.1 Chapter 1 Introduction
    • 16.2 Chapter 2 Overview of Formal Languages
    • 16.3 Chapter 3 Finite Automata
    • 16.4 Chapter 4 Lexical Analysis
    • 16.5 Chapter 5 Top-Down Analysis
    • 16.6 Chapter 6 Precedence Analysis Methods
    • 16.7 Chapter 7 Bottom-Up LR(k) Analysis Methods
    • 16.8 Chapter 8 Syntax-Directed Translation
    • 16.9 Chapter 9 Run-Time Storage Organization and Management
    • 16.10 Chapter 10 Symbol Table Organization and Lookup
    • 16.11 Chapter 11 Optimization
    • 16.12 Chapter 12 Code Generation
  • 17 17 Extended Reading 4: Static Single Assignment
    • 17.1 17.1 SSA-based Compiler Design
    • 17.2 17.2 A Simple, Fast Dominance Algorithm (Rice Computer Science TR-06-33870)
    • 17.3 17.3 The Development of Static Single Assignment Form(KennethZadeck-Presentation on the History of SSA at the SSA'09 Seminar, Autrans, France, April 2009)
    • 17.4 17.4 SPIR-V Specification(John Kessenich, Google and Boaz Ouriel, Intel Version 1.00, Revision 12 January 16, 2018)
    • 17.5 17.5 Efficiently Computing Static Single Assignment Form and the Control Dependence Graph
    • 17.6 17.6 Global Value Numbers and Redundant Computations
  • 18 18 Extended Reading 5: Computer Science
    • 18.1 1 Understanding Real Address Mode and Protected Mode
    • 18.2 2 Real Mode and Protected Mode
    • 18.3 3 Differences Between Real Mode and Protected Mode, and Their Addressing
    • 18.4 Computer Science Terminology
    • 18.5 Bit Math in the C Language
    • 18.6 Auto-generating subtitles for any video file
    • 18.7 Autosub
    • 18.8 Inline Functions (inline) vs. Macro Definitions (#define) in C
  • 19 19 Related Study
    • 19.1 The Dragon Book, the Whale Book, and the Tiger Book
    • 19.2 Complexity
    • 19.3 MPC Complexity
    • 19.4 NP-completeness
    • 19.5 Computational complexity theory
  • 20 20 The Global Fight Against the Epidemic: Extensions from the Wuhan Campaign
    • 20.1 Extraordinary G20 Leaders’ Summit Statement on COVID-19
    • 20.2 Experts urge proactive measures to fight virus
    • 20.3 Poor Countries Under the COVID-19 Virus
    • 20.4 Correctly Understanding the Terms Fatality Rate, Flattening the Curve, and Epidemic Peak
    • 20.5 Why the Global Economy May Fall into a Prolonged Recession
    • 20.6 Why COVID-19 Tests Can Return "False Negatives"
    • 20.7 In New York, Nearly Everyone Knows Someone Infected with the Virus
    • 20.8 An Address by Her Majesty The Queen
    • 20.9 Boris Johnson admitted to hospital over virus symptoms
    • 20.10 Edinburgh festivals cancelled due to coronavirus
    • 20.11 US set to recommend wearing of masks
    • 20.12 Boris Johnson in self-isolation after catching coronavirus
    • 20.13 Covid-19:The porous borders where the virus cannot be controlled
    • 20.14 When Europeans Begin to Wear Masks
    • 20.15 Lockdown and ‘Intimate Terrorism’
    • 20.16 Us Election 2020: Bernie Sanders Suspends Presidential Campaign
    • 20.17 The aircraft carrier being infected with the coronavirus
    • 20.18 Spent to the W.H.O.
    • 20.19 Unemployment
    • 20.20 The beat of a heart the glimmer of a soul
    • 20.21 Coronavirus pandemic: EU agrees €500bn rescue package
    • 20.22 The World After Coronavirus
  • 21 21 Course Ideological and Political Education Plan
    • 21.1 21.1 Course Ideological and Political Education
    • 21.2 21.2 Implementation Plan
NP-completeness

From: https://en.wikipedia.org/wiki/NP-completeness

From Wikipedia, the free encyclopedia

Euler diagram for P, NP, NP-complete, and NP-hard sets of problems. The left side is valid under the assumption that P≠NP, while the right side is valid under the assumption that P=NP (except that the empty language and its complement are never NP-complete, and in general, not every problem in P or NP is NP-complete)

In computational complexity theory, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly (in polynomial time),[1] such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty. The complexity class of problems of this form is called NP, an abbreviation for "nondeterministic polynomial time". A problem is said to be NP-hard if everything in NP can be transformed in polynomial time into it, and a problem is NP-complete if it is both in NP and NP-hard. The NP-complete problems represent the hardest problems in NP. If any NP-complete problem has a polynomial time algorithm, all problems in NP do. The set of NP-complete problems is often denoted by NP-C or NPC.
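
To make the "quickly testable solution" idea concrete, here is a minimal Python sketch (an illustration added here, not part of the original article): for the NP problem of Boolean satisfiability, a candidate truth assignment is the polynomial-length certificate, and checking it takes time linear in the size of the formula.

    def verify_sat(clauses, assignment):
        """Polynomial-time certificate check for CNF satisfiability.
        clauses: list of clauses, each a list of non-zero integers, where
                 k stands for variable k and -k for its negation.
        assignment: dict mapping variable index -> True/False (the certificate).
        Runs in time proportional to the total number of literals."""
        for clause in clauses:
            if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                return False          # this clause has no true literal
        return True                   # every clause is satisfied

    # (x1 OR NOT x2) AND (x2 OR x3), with x1 = x2 = True, x3 = False
    print(verify_sat([[1, -2], [2, 3]], {1: True, 2: True, 3: False}))  # True

Only this verification step is known to be fast; finding a satisfying assignment in the first place is the hard, NP-complete part.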

Although a solution to an NP-complete problem can be verified "quickly", there is no known way to find a solution quickly. That is, the time required to solve the problem using any currently known algorithm increases rapidly as the size of the problem grows. As a consequence, determining whether it is possible to solve these problems quickly, called the P versus NP problem, is one of the fundamental unsolved problems in computer science today.

While a method for computing the solutions to NP-complete problems quickly remains undiscovered, computer scientists and programmers still frequently encounter NP-complete problems. NP-complete problems are often addressed by using heuristic methods and approximation algorithms.

Overview

NP-complete problems are in NP, the set of all decision problems whose solutions can be verified in polynomial time; NP may be equivalently defined as the set of decision problems that can be solved in polynomial time on a non-deterministic Turing machine. A problem p in NP is NP-complete if every other problem in NP can be transformed (or reduced) into p in polynomial time.

It is not known whether every problem in NP can be quickly solved—this is called the P versus NP problem. But if any NP-complete problem can be solved quickly, then every problem in NP can, because the definition of an NP-complete problem states that every problem in NP must be quickly reducible to every NP-complete problem (that is, it can be reduced in polynomial time). Because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general.

Formal definition

See also: formal definition for NP-completeness (article P = NP)

A decision problem C is NP-complete if:

  1. C is in NP, and

  2. Every problem in NP is reducible to C in polynomial time.[2]

C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time.

Note that a problem satisfying condition 2 is said to be NP-hard, whether or not it satisfies condition 1.[3]

A consequence of this definition is that if we had a polynomial time algorithm (on a UTM, or any other Turing-equivalent abstract machine) for C, we could solve all problems in NP in polynomial time.
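
Written symbolically, with ≤_p denoting polynomial-time many-one reducibility, the two conditions above can be restated in one line:

    C \text{ is NP-complete} \iff C \in \mathsf{NP} \;\text{ and }\; \forall L \in \mathsf{NP}:\ L \le_p C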

Background

The concept of NP-completeness was introduced in 1971 (see Cook–Levin theorem), though the term NP-complete was introduced later. At the 1971 STOC conference, there was a fierce debate between the computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine. John Hopcroft brought everyone at the conference to a consensus that the question of whether NP-complete problems are solvable in polynomial time should be put off to be solved at some later date, since nobody had any formal proofs for their claims one way or the other. This is known as the question of whether P=NP.

Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics. The Clay Mathematics Institute is offering a US$1 million reward to anyone who has a formal proof that P=NP or that P≠NP.

The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete. In 1972, Richard Karp proved that several other problems were also NP-complete (see Karp's 21 NP-complete problems); thus there is a class of NP-complete problems (besides the Boolean satisfiability problem). Since the original results, thousands of other problems have been shown to be NP-complete by reductions from other problems previously shown to be NP-complete; many of these problems are collected in Garey and Johnson's 1979 book Computers and Intractability: A Guide to the Theory of NP-Completeness.[4]

NP-complete problems

Some NP-complete problems, indicating the reductions typically used to prove their NP-completeness

Main article: List of NP-complete problems

An interesting example is the graph isomorphism problem, the graph theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems:

  • Graph Isomorphism: Is graph G1 isomorphic to graph G2?

  • Subgraph Isomorphism: Is graph G1 isomorphic to a subgraph of graph G2?

The Subgraph Isomorphism problem is NP-complete. The graph isomorphism problem is suspected to be neither in P nor NP-complete, though it is in NP. This is an example of a problem that is thought to be hard, but is not thought to be NP-complete.

The easiest way to prove that some new problem is NP-complete is first to prove that it is in NP, and then to reduce some known NP-complete problem to it. Therefore, it is useful to know a variety of NP-complete problems. The list below contains some well-known problems that are NP-complete when expressed as decision problems.
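
As an illustration of such a reduction (a sketch added here, not from the article), the classic Karp construction maps a 3-CNF formula to a graph and a target size k such that the formula is satisfiable exactly when the graph has an independent set of size k; since the construction runs in polynomial time, it transfers the NP-hardness of 3-satisfiability to the independent set problem.

    def three_sat_to_independent_set(clauses):
        """Map a 3-CNF formula (list of clauses of literals, -k meaning NOT k)
        to (vertices, edges, k): the formula is satisfiable iff the graph
        has an independent set of size k = number of clauses."""
        vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
        edges = []
        for a in range(len(vertices)):
            for b in range(a + 1, len(vertices)):
                (ci, li), (cj, lj) = vertices[a], vertices[b]
                if ci == cj or li == -lj:   # same clause, or contradictory literals
                    edges.append((vertices[a], vertices[b]))
        return vertices, edges, len(clauses)

    print(three_sat_to_independent_set([[1, -2, 3], [2, 3, -1]]))

Choosing one vertex per clause that is not adjacent to any other chosen vertex corresponds to picking one true literal per clause without contradictions.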

The diagram whose caption appears above shows some of these problems and the reductions typically used to prove their NP-completeness. In this diagram, problems are reduced from bottom to top. Note that this diagram is misleading as a description of the mathematical relationship between these problems, as there exists a polynomial-time reduction between any two NP-complete problems; but it indicates where demonstrating this polynomial-time reduction has been easiest.

There is often only a small difference between a problem in P and an NP-complete problem. For example, the 3-satisfiability problem, a restriction of the boolean satisfiability problem, remains NP-complete, whereas the slightly more restricted 2-satisfiability problem is in P (specifically, NL-complete), and the slightly more general max. 2-sat. problem is again NP-complete. Determining whether a graph can be colored with 2 colors is in P, but with 3 colors is NP-complete, even when restricted to planar graphs. Determining if a graph is a cycle or is bipartite is very easy (in L), but finding a maximum bipartite or a maximum cycle subgraph is NP-complete. A solution of the knapsack problem within any fixed percentage of the optimal solution can be computed in polynomial time, but finding the optimal solution is NP-complete.
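
The 2-colouring case can be made concrete with a short sketch (an illustration added here): a breadth-first search either produces a valid 2-colouring or finds two adjacent vertices forced to share a colour, so 2-colourability is decided in time linear in the size of the graph, whereas no comparably fast method is known for 3-colourability.

    from collections import deque

    def two_colorable(adj):
        """Decide 2-colourability (bipartiteness) in O(V + E) time.
        adj: dict mapping each vertex to a list of its neighbours."""
        colour = {}
        for start in adj:
            if start in colour:
                continue
            colour[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in colour:
                        colour[v] = 1 - colour[u]
                        queue.append(v)
                    elif colour[v] == colour[u]:
                        return False   # adjacent vertices would need the same colour
        return True

    print(two_colorable({1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}))  # 4-cycle: True
    print(two_colorable({1: [2, 3], 2: [1, 3], 3: [1, 2]}))             # triangle: False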

Solving NP-complete problems

At present, all known algorithms for NP-complete problems require time that is superpolynomial in the input size, and it is unknown whether there are any faster algorithms.

The following techniques can be applied to solve computational problems in general, and they often give rise to substantially faster algorithms:

  • Approximation: Instead of searching for an optimal solution, search for a solution that is at most a factor from an optimal one (see the vertex-cover sketch after this list).

  • Randomization: Use randomness to get a faster average running time, and allow the algorithm to fail with some small probability. Note: The Monte Carlo method is not an example of an efficient algorithm in this specific sense, although evolutionary approaches like Genetic algorithms may be.

  • Restriction: By restricting the structure of the input (e.g., to planar graphs), faster algorithms are usually possible.

  • Parameterization: Often there are fast algorithms if certain parameters of the input are fixed.

  • Heuristic: An algorithm that works "reasonably well" in many cases, but for which there is no proof that it is both always fast and always produces a good result. Metaheuristic approaches are often used.
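
As a concrete instance of the "Approximation" item above (an illustrative sketch, not taken from the article), the classic greedy-matching heuristic for minimum vertex cover runs in polynomial time and always returns a cover at most twice the optimal size, even though finding a minimum vertex cover is NP-hard:

    def vertex_cover_2approx(edges):
        """Return a vertex cover at most twice the minimum size.
        edges: iterable of (u, v) pairs describing an undirected graph."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                # Take both endpoints of an uncovered edge; any optimal
                # cover must contain at least one of them, so the factor is 2.
                cover.add(u)
                cover.add(v)
        return cover

    # Star graph: the optimum is {0}; the sketch may return {0, 1} (factor 2).
    print(vertex_cover_2approx([(0, 1), (0, 2), (0, 3)]))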

One example of a heuristic algorithm is a suboptimal greedy coloring algorithm used for graph coloring during the register allocation phase of some compilers, a technique called graph-coloring global register allocation. Each vertex is a variable, edges are drawn between variables which are being used at the same time, and colors indicate the register assigned to each variable. Because most RISC machines have a fairly large number of general-purpose registers, even a heuristic approach is effective for this application.
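
A minimal sketch of that idea follows (the interference graph, variable names, and register count below are invented for illustration): each variable is assigned the lowest-numbered register not already used by a neighbour in the interference graph, and is marked for spilling when no register is free.

    def greedy_allocate(interference, num_registers):
        """Greedy graph colouring for register allocation (a heuristic; may spill).
        interference: dict mapping each variable to the set of variables that
        are live at the same time, i.e. its neighbours in the interference graph."""
        allocation = {}
        for var in interference:                     # simple fixed ordering
            used = {allocation[n] for n in interference[var] if n in allocation}
            free = [r for r in range(num_registers) if r not in used]
            allocation[var] = free[0] if free else 'spill'
        return allocation

    conflicts = {'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a'}, 'd': set()}
    print(greedy_allocate(conflicts, 2))
    # {'a': 0, 'b': 1, 'c': 1, 'd': 0} -- interfering variables get distinct registers

Real allocators choose the colouring order more carefully (for example Chaitin-style simplification), but the colours-as-registers idea is the same.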

Completeness under different types of reduction

In the definition of NP-complete given above, the term reduction was used in the technical meaning of a polynomial-time many-one reduction.

Another type of reduction is polynomial-time Turing reduction. A problem X is polynomial-time Turing-reducible to a problem Y if, given a subroutine that solves Y in polynomial time, one could write a program that calls this subroutine and solves X in polynomial time. This contrasts with many-one reducibility, which has the restriction that the program can only call the subroutine once, and the return value of the subroutine must be the return value of the program.
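
For instance (an illustrative sketch added here, not from the article), the standard self-reduction of SAT search to SAT decision is a polynomial-time Turing reduction: it calls a yes/no decision subroutine once per variable, fixing that variable's value according to the answer, which a many-one reduction, limited to a single call, could not do.

    from itertools import product

    def find_assignment(clauses, variables, sat_decider):
        """Recover a satisfying assignment using only a yes/no SAT oracle.
        sat_decider(clauses) -> bool; it is called O(len(variables)) times."""
        if not sat_decider(clauses):
            return None                          # unsatisfiable
        assignment = {}
        for var in variables:
            if sat_decider(clauses + [[var]]):   # can var be set to True?
                assignment[var] = True
                clauses = clauses + [[var]]
            else:
                assignment[var] = False
                clauses = clauses + [[-var]]
        return assignment

    def brute_force_sat(clauses):
        """Exponential-time stand-in for the decision subroutine, for the demo only."""
        variables = sorted({abs(lit) for clause in clauses for lit in clause})
        for bits in product([False, True], repeat=len(variables)):
            value = dict(zip(variables, bits))
            if all(any(value[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses):
                return True
        return False

    print(find_assignment([[1, -2], [2, 3]], [1, 2, 3], brute_force_sat))
    # {1: True, 2: True, 3: True} -- one satisfying assignment of the formula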

If one defines the analogue to NP-complete with Turing reductions instead of many-one reductions, the resulting set of problems won't be smaller than NP-complete; it is an open question whether it will be any larger.

Another type of reduction that is also often used to define NP-completeness is the logarithmic-space many-one reduction, which is a many-one reduction that can be computed with only a logarithmic amount of space. Since every computation that can be done in logarithmic space can also be done in polynomial time, it follows that if there is a logarithmic-space many-one reduction then there is also a polynomial-time many-one reduction. This type of reduction is more refined than the more usual polynomial-time many-one reductions and it allows us to distinguish more classes, such as P-complete. Whether under these types of reductions the definition of NP-complete changes is still an open problem. All currently known NP-complete problems are NP-complete under log space reductions, and indeed remain NP-complete even under much weaker reductions.[5] It is known, however, that AC0 reductions define a strictly smaller class than polynomial-time reductions.[6]

Naming

According to Donald Knuth, the name "NP-complete" was popularized by Alfred Aho, John Hopcroft and Jeffrey Ullman in their celebrated textbook "The Design and Analysis of Computer Algorithms". He reports that they introduced the change in the galley proofs for the book (from "polynomially-complete"), in accordance with the results of a poll he had conducted of the theoretical computer science community.[7] Other suggestions made in the poll[8] included "Herculean", "formidable", Steiglitz's "hard-boiled" in honor of Cook, and Shen Lin's acronym "PET", which stood for "probably exponential time", but depending on which way the P versus NP problem went, could stand for "provably exponential time" or "previously exponential time".[9]

Common misconceptions

The following misconceptions are frequent.[10]

  • "NP-complete problems are the most difficult known problems." Since NP-complete problems are in NP, their running time is at most exponential. However, some problems provably require more time, for example Presburger arithmetic.

  • "NP-complete problems are difficult because there are so many different solutions." On the one hand, there are many problems that have a solution space just as large, but can be solved in polynomial time (for example minimum spanning tree). On the other hand, there are NP-problems with at most one solution that are NP-hard under randomized polynomial-time reduction (see Valiant–Vazirani theorem).

  • "Solving NP-complete problems requires exponential time." First, this would imply P ≠ NP, which is still an unsolved question. Further, some NP-complete problems actually have algorithms running in superpolynomial, but subexponential time such as O(2nn). For example, the independent set and dominating set problems for planar graphs are NP-complete, but can be solved in subexponential time using the planar separator theorem.[11]

  • "All instances of an NP-complete problem are difficult." Often some instances, or even most instances, may be easy to solve within polynomial time. However, unless P=NP, any polynomial-time algorithm must asymptotically be wrong on more than polynomially many of the exponentially many inputs of a certain size.[12]

  • "If P=NP, all cryptographic ciphers can be broken." A polynomial-time problem can be very difficult to solve in practice if the polynomial's degree or constants are large enough. For example, ciphers with a fixed key length, such as Advanced Encryption Standard, can all be broken in constant time (and are thus already known to be in P), though with current technology that constant may exceed the age of the universe. In addition, information-theoretic security provides cryptographic methods that cannot be broken even with unlimited computing power.

Properties

Viewing a decision problem as a formal language in some fixed encoding, the set NPC of all NP-complete problems is not closed under:

It is not known whether NPC is closed under complementation, since NPC=co-NPC if and only if NP=co-NP, and whether NP=co-NP is an open question.[13]

References

Citations

  1. ^ Cobham, Alan (1965). "The intrinsic computational difficulty of functions". Proc. Logic, Methodology, and Philosophy of Science II. North Holland.

  2. ^ J. van Leeuwen (1998). Handbook of Theoretical Computer Science. Elsevier. p. 84. ISBN 978-0-262-72014-4.

  3. ^ J. van Leeuwen (1998). Handbook of Theoretical Computer Science. Elsevier. p. 80. ISBN 978-0-262-72014-4.

  4. ^ Garey, Michael R.; Johnson, D. S. (1979). Victor Klee (ed.). Computers and Intractability: A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences. San Francisco, Calif.: W. H. Freeman and Co. pp. x+338. ISBN 978-0-7167-1045-5. MR 0519066.

  5. ^ Agrawal, M.; Allender, E.; Rudich, Steven (1998). "Reductions in Circuit Complexity: An Isomorphism Theorem and a Gap Theorem". Journal of Computer and System Sciences. 57 (2): 127–143. doi:10.1006/jcss.1998.1583. ISSN 1090-2724.

  6. ^ Agrawal, M.; Allender, E.; Impagliazzo, R.; Pitassi, T.; Rudich, Steven (2001). "Reducing the complexity of reductions". Computational Complexity. 10 (2): 117–138. doi:10.1007/s00037-001-8191-1. ISSN 1016-3328.

  7. ^ Don Knuth, Tracy Larrabee, and Paul M. Roberts, Mathematical Writing Archived 2010-08-27 at the Wayback Machine § 25, MAA Notes No. 14, MAA, 1989 (also Stanford Technical Report, 1987).

  8. ^ Knuth, D. E. (1974). "A terminological proposal". SIGACT News. 6 (1): 12–18. doi:10.1145/1811129.1811130.

  9. ^ See the poll, or [1].

  10. ^ Ball, Philip. "DNA computer helps travelling salesman". doi:10.1038/news000113-10.

  11. ^ Bern (1990); Deĭneko, Klinz & Woeginger (2006); Dorn et al. (2005); Lipton & Tarjan (1980).

  12. ^ Hemaspaandra, L. A.; Williams, R. (2012). "SIGACT News Complexity Theory Column 76". ACM SIGACT News. 43 (4): 70. doi:10.1145/2421119.2421135.

  13. ^ Talbot, John; Welsh, D. J. A. (2006), Complexity and Cryptography: An Introduction, Cambridge University Press, p. 57, ISBN 9780521617710: "The question of whether NP and co-NP are equal is probably the second most important open problem in complexity theory, after the P versus NP question."
