[/==============================================================================
    Copyright (C) 2001-2011 Joel de Guzman
    Copyright (C) 2001-2011 Hartmut Kaiser
    Distributed under the Boost Software License, Version 1.0. (See accompanying
    file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
===============================================================================/]

[section:lexer_static_model The /Static/ Lexer Model]

So far the documentation of __lex__ has mostly described the features of the
/dynamic/ model, where the tables needed for lexical analysis are generated
from the regular expressions at runtime. The big advantage of the dynamic model
is its flexibility and its tight integration with the __spirit__ library and
the C++ host language. Its big disadvantage is the additional runtime needed to
generate the tables, which can be a limitation especially for larger lexical
analyzers. The /static/ model strives to build upon the smooth integration with
__spirit__ and C++, and reuses large parts of the __lex__ library as described
so far, while avoiding the additional runtime cost by relying on pre-generated
tables and tokenizer routines. To make the code generation as simple as
possible, the static model reuses the token definition types developed for the
/dynamic/ model without any changes. As will be shown in this section, building
a code generator based on an existing token definition type is a matter of
writing 3 lines of code.

Assuming you have already built a dynamic lexer for your problem, there are two
more steps needed to create a static lexical analyzer using __lex__:

# generating the C++ code for the static analyzer (including the tokenization
  function and the corresponding tables), and
# modifying the dynamic lexical analyzer to use the generated code.

Both steps are described in more detail in the two sections below (for the full
source code used in this example see the code here:
[@../../example/lex/static_lexer/word_count_tokens.hpp the common token definition],
[@../../example/lex/static_lexer/word_count_generate.cpp the code generator],
[@../../example/lex/static_lexer/word_count_static.hpp the generated code], and
[@../../example/lex/static_lexer/word_count_static.cpp the static lexical analyzer]).

[import ../example/lex/static_lexer/word_count_tokens.hpp]
[import ../example/lex/static_lexer/word_count_static.cpp]
[import ../example/lex/static_lexer/word_count_generate.cpp]

But first we provide the code snippets needed to follow the descriptions below.
Both the definition of the token identifiers used and the definition of the
token definition class are placed into a separate header file in this example,
making them available to the code generator and to the static lexical analyzer
alike.

[wc_static_tokenids]

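Such a token id definition might look similar to the following sketch; the
enumerator name `IDANY` and the included header are assumptions made for this
illustration, not necessarily identical to the linked example header:

    // a sketch of the token id definitions placed into the shared header
    #include <boost/spirit/include/lex_lexertl.hpp>

    enum tokenids
    {
        // custom token ids start above the range of ids reserved by the library
        IDANY = boost::spirit::lex::min_token_id + 1
    };
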
The important point here is that the token definition class is no different
from a similar class to be used for a dynamic lexical analyzer. The library has
been designed in such a way that all components (the dynamic lexical analyzer,
the code generator, and the static lexical analyzer) can reuse the very same
token definition syntax.

[wc_static_tokendef]

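As a rough illustration, such a token definition class might be written along
the lines of the sketch below. It follows the word counter example used
throughout this section, but the patterns and member names are an approximation
rather than a verbatim copy of the linked header:

    // a sketch of a token definition class usable with all three components
    // (uses the tokenids enum from the sketch above)
    #include <boost/spirit/include/lex_lexertl.hpp>
    #include <string>

    namespace lex = boost::spirit::lex;

    template <typename Lexer>
    struct word_count_tokens : lex::lexer<Lexer>
    {
        word_count_tokens()
        {
            // define the tokens and associate them with the lexer
            word = "[^ \t\n]+";
            this->self.add
                (word)          // words, exposing the matched text
                ('\n')          // newlines
                (".", IDANY)    // any other single character
            ;
        }
        lex::token_def<std::string> word;
    };
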
The only thing that changes between the three different use cases is the
template parameter used to instantiate a concrete token definition. For the
dynamic model and the code generator you will probably use the
__class_lexertl_lexer__ template, whereas for the static model you will use the
__class_lexertl_static_lexer__ type as the template parameter.

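Continuing the sketch from above, the different instantiations might look
roughly as follows (the tables type `lexertl::static_::lexer_wc` corresponds to
the suffix "wc" used by the code generator below; all names here are
assumptions made for this sketch):

    // the same token definition template, instantiated for different models
    typedef lex::lexertl::token<char const*> token_type;

    // dynamic model and code generator: based on lexertl::lexer
    typedef word_count_tokens<lex::lexertl::lexer<token_type> > dynamic_type;

    // static model: based on lexertl::static_lexer and the generated tables
    typedef word_count_tokens<
        lex::lexertl::static_lexer<token_type, lex::lexertl::static_::lexer_wc>
    > static_type;
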
This example not only shows how to build a static lexer, but additionally
demonstrates how such a lexer can be used for parsing in conjunction with a
__qi__ grammar. For completeness, we provide the simple grammar used in this
example. As you can see, this grammar does not have any dependency on the
static lexical analyzer, and for this reason it is no different from a grammar
used either without a lexer or with a dynamic lexical analyzer as described
before.

[wc_static_grammar]

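Stripped of the semantic actions doing the actual counting, a grammar of this
kind might look roughly like the sketch below; the member `word` and the id
`IDANY` refer to the token definition sketched earlier:

    // a sketch of a qi grammar parsing the token stream built by the lexer
    #include <boost/spirit/include/qi.hpp>
    #include <boost/spirit/include/lex_lexertl.hpp>

    namespace qi = boost::spirit::qi;

    template <typename Iterator>
    struct word_count_grammar : qi::grammar<Iterator>
    {
        template <typename TokenDef>
        word_count_grammar(TokenDef const& tok)
          : word_count_grammar::base_type(start)
        {
            // accept any sequence of words, newlines, and other characters;
            // the real example attaches counting actions to each alternative
            start = *(tok.word | qi::lit('\n') | qi::token(IDANY));
        }
        qi::rule<Iterator> start;
    };
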
[heading Generating the Static Analyzer]

The first additional step to perform in order to create a static lexical
analyzer is to write a small stand-alone program creating the lexer tables and
the corresponding tokenization function. For this purpose the __lex__ library
exposes a special API - the function __api_generate_static__. It implements the
whole code generator; no further code is needed. All it takes to invoke this
function is a token definition instance, an output stream to generate the code
to, and an optional string to be used as a suffix for the name of the generated
function. All in all just a couple of lines of code.

[wc_static_generate_main]

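Such a generator program might be written along the following lines. The
function name `generate_static_dfa` and the forwarding header used here are an
assumption of what __api_generate_static__ resolves to, so the linked
[@../../example/lex/static_lexer/word_count_generate.cpp code generator]
remains the authoritative version:

    // a sketch of a stand-alone code generator program
    #include <boost/spirit/include/lex_lexertl.hpp>
    #include <boost/spirit/include/lex_generate_static_lexertl.hpp>

    #include <fstream>

    #include "word_count_tokens.hpp"    // the shared token definition

    namespace lex = boost::spirit::lex;

    int main()
    {
        // the token definition instance, based on the dynamic lexer
        word_count_tokens<lex::lexertl::lexer<> > word_count;

        // the file receiving the generated tokenizer function and tables
        std::ofstream out("word_count_static.hpp");

        // invoke the code generator, using "wc" as the suffix for the names
        // of the generated function and tables
        return lex::lexertl::generate_static_dfa(word_count, out, "wc") ? 0 : 1;
    }
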
The code generator shown above produces output which should be stored in a file
for later inclusion into the static lexical analyzer, as shown in the next
topic (the full generated code can be viewed
[@../../example/lex/static_lexer/word_count_static.hpp here]).

[note The generated code has the version number of the current __lex__ library
      compiled in. This version number is checked at compilation time of your
      static lexer object to ensure it is compiled with exactly the same
      version of the __lex__ library as the one used to generate the lexer
      tables. If the versions do not match you will see a compilation error
      mentioning an `incompatible_static_lexer_version`.
]

[heading Modifying the Dynamic Analyzer]

The second step required to convert an existing dynamic lexer into a static one
is to change your main program in two places. First, you need to change the
type of the lexer used (that is, the template parameter used while
instantiating your token definition class). While in the dynamic model we have
been using the __class_lexertl_lexer__ template, we now need to change that to
the __class_lexertl_static_lexer__ type. The second change is tightly related
to the first one and involves correcting the corresponding `#include` statement
to:

[wc_static_include]

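In terms of code, this change might look similar to the following sketch (the
header names are assumptions; the snippet above shows the actual statement used
in the example):

    // before: the header providing the dynamic lexertl lexer
    // #include <boost/spirit/include/lex_lexertl.hpp>

    // after: the static lexer framework plus the generated tables
    #include <boost/spirit/include/lex_static_lexertl.hpp>
    #include "word_count_static.hpp"
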
Otherwise the main program is no different from an equivalent program using the
dynamic model. This makes it easy to develop the lexer in dynamic mode and to
switch to the static mode after the code has stabilized. The simple generator
application shown above enables the integration of the code generator into any
existing build process. The following code snippet provides the overall main
function, highlighting the code to be changed.

[wc_static_main]

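In outline, such a main function might look similar to the sketch below,
reusing the `word_count_tokens` and `word_count_grammar` templates sketched
earlier (the type and header names are, again, assumptions rather than a
verbatim copy of the example):

    // a sketch of the main function of the static lexical analyzer
    #include <boost/spirit/include/qi.hpp>
    #include <boost/spirit/include/lex_static_lexertl.hpp>
    #include <boost/mpl/vector.hpp>

    #include <iostream>
    #include <string>

    #include "word_count_static.hpp"    // the generated tables

    namespace lex = boost::spirit::lex;

    int main()
    {
        // the changed part: use static_lexer with the generated tables
        typedef lex::lexertl::token<
            char const*, boost::mpl::vector<std::string>
        > token_type;
        typedef lex::lexertl::static_lexer<
            token_type, lex::lexertl::static_::lexer_wc
        > lexer_type;
        typedef word_count_tokens<lexer_type>::iterator_type iterator_type;

        // lexer and grammar objects, unchanged from the dynamic version
        word_count_tokens<lexer_type> word_count;
        word_count_grammar<iterator_type> g(word_count);

        std::string str("Our hiking boots are ready.\n");
        char const* first = str.c_str();
        char const* last = first + str.size();

        // parsing is done based on the token stream, not the character stream
        bool r = lex::tokenize_and_parse(first, last, word_count, g);
        std::cout << (r ? "parsed successfully\n" : "parse failed\n");
        return r ? 0 : 1;
    }
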
[important The generated code for the static lexer contains the token ids as
           they have been assigned, either explicitly by the programmer or
           implicitly during lexer construction. It is your responsibility to
           make sure that all instances of a particular static lexer type use
           exactly the same token ids. The constructor of the lexer object has
           a second (default) parameter allowing you to designate the starting
           token id to be used while assigning ids to the token definitions.
           The requirement above is fulfilled by default as long as no
           `first_id` is specified during construction of the static lexer
           instances.
]

[endsect]