[/==============================================================================
    Copyright (C) 2001-2011 Joel de Guzman
    Copyright (C) 2001-2011 Hartmut Kaiser

    Distributed under the Boost Software License, Version 1.0. (See accompanying
    file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
===============================================================================/]

[section:lexer_token_values About Tokens and Token Values]
As already discussed, lexical scanning is the process of analyzing the stream
of input characters and separating it into strings called tokens, most of the
time separated by whitespace. The different token types recognized by a lexical
analyzer often get assigned unique integer token identifiers (token ids). These
token ids are normally used by the parser to identify the current token without
having to look at the matched string again. The __lex__ library is no
different in this respect, as it uses token ids as the main means of
identifying the different token types defined for a particular lexical
analyzer. However, it differs from commonly used lexical analyzers in that
it returns (references to) instances of a (user defined) token class
to the user. The only requirement this token class must satisfy is that it
carries at least the token id of the token it represents. For more information
about the interface a user defined token type has to expose, please see the
__sec_ref_lex_token__ reference. The library provides a default
token type based on the __lexertl__ library which should be sufficient in most
cases: the __class_lexertl_token__ type. This section focuses on the
general features a token class may implement and how these integrate with the
other parts of the __lex__ library.
[heading The Anatomy of a Token]

It is very important to understand the difference between a token definition
(represented by the __class_token_def__ template) and a token itself (for
instance represented by the __class_lexertl_token__ template).

The token definition is used to describe the main features of a particular
token type, especially:

* to simplify the definition of a token type using a regular expression pattern
  applied while matching this token type,
* to associate a token type with a particular lexer state,
* to optionally assign a token id to a token type,
* to optionally associate some code to execute whenever an instance of this
  token type has been matched,
* and to optionally specify the attribute type of the token value.
The token itself is a data structure returned by the lexer iterators.
Dereferencing a lexer iterator returns a reference to the last matched token
instance. It encapsulates the part of the underlying input sequence matched by
the regular expression used in the definition of this token type.
Incrementing the lexer iterator invokes the lexical analyzer to
match the next token by advancing the underlying input stream. The token data
structure contains at least the token id of the matched token type, which
identifies the type of the matched character sequence. Optionally, the token
instance may contain the token value and/or the lexer state this token instance
was matched in. The following [link spirit.lex.tokenstructure figure] shows the
schematic structure of a token.

[fig tokenstructure.png..The structure of a token..spirit.lex.tokenstructure]
The token value and the lexer state the token has been recognized in may be
omitted for optimization reasons, thus avoiding the need for the token to carry
more data than actually required. This configuration can be achieved by
supplying appropriate template parameters for the
__class_lexertl_token__ template while defining the token type.
The lexer iterator returns the same token type for each of the different
matched token definitions. To accommodate the possibly different token
/value/ types exposed by the various token types (token definitions), the
general type of the token value is a __boost_variant__. At a minimum (for the
default configuration) this token value variant will be configured to always
hold a __boost_iterator_range__ containing the pair of iterators pointing to
the matched input sequence for this token instance.
[note If the lexical analyzer is used in conjunction with a __qi__ parser, the
      stored __boost_iterator_range__ token value will be converted to the
      requested token type (parser attribute) exactly once. This happens at the
      time of the first access to the token value requiring the
      corresponding type conversion. The converted token value will be stored
      in the __boost_variant__ replacing the initially stored iterator range.
      This avoids having to convert the input sequence to the token value more
      than once, thus optimizing the integration of the lexer with __qi__, even
      during parser backtracking.
]
Here is the template prototype of the __class_lexertl_token__ template:

    template <
        typename Iterator = char const*,
        typename AttributeTypes = mpl::vector0<>,
        typename HasState = mpl::true_
    >
    struct lexertl_token;
[variablelist where:
    [[Iterator]         [This is the type of the iterator used to access the
                         underlying input stream. It defaults to a plain
                         `char const*`.]]
    [[AttributeTypes]   [This is either an MPL sequence containing all
                         attribute types used for the token definitions or the
                         type `omit`. If the MPL sequence is empty (which is
                         the default), all token instances will store a
                         __boost_iterator_range__`<Iterator>` pointing to the
                         start and the end of the matched section in the input
                         stream. If the type is `omit`, the generated tokens
                         will contain no token value (attribute) at all.]]
    [[HasState]         [This is either `mpl::true_` or `mpl::false_`, allowing
                         control as to whether the generated token instances
                         will contain the lexer state they were generated in.
                         The default is `mpl::true_`, so all token instances
                         will contain the lexer state.]]
]
Normally, during construction, a token instance always holds the
__boost_iterator_range__ as its token value, unless it has been defined
using the `omit` token value type. This iterator range is then
converted in place to the requested token value type (attribute) when it is
requested for the first time.
[heading The Physiognomy of a Token Definition]

The token definitions (represented by the __class_token_def__ template) are
normally used as part of the definition of the lexical analyzer. At the same
time a token definition instance may be used as a parser component in __qi__.

The template prototype of this class is shown here:

    template<
        typename Attribute = unused_type,
        typename Char = char
    >
    class token_def;
[variablelist where:
    [[Attribute]        [This is the type of the token value (attribute)
                         supported by token instances representing this token
                         type. This attribute type is exposed to the __qi__
                         library whenever this token definition is used as a
                         parser component. The default attribute type is
                         `unused_type`, which means the token instance holds a
                         __boost_iterator_range__ pointing to the start
                         and the end of the matched section in the input stream.
                         If the attribute is `omit` the token instance will
                         expose no token value at all. Any other type will be
                         used directly as the token value type.]]
    [[Char]             [This is the value type of the iterator for the
                         underlying input sequence. It defaults to `char`.]]
]
The semantics of the template parameters for the token type and the token
definition type are very similar and interdependent. As a rule of thumb you can
think of the token definition type as the means of specifying everything
related to a single specific token type (such as `identifier` or `integer`).
On the other hand, the token type is used to define the general properties of
all token instances generated by the __lex__ library.
[important If you don't list any token value types in the token type definition
           declaration (resulting in the usage of the default
           __boost_iterator_range__ token type) everything will compile and
           work just fine, just a bit less efficiently. This is because the
           token value will be converted from the matched input sequence every
           time it is requested.

           But as soon as you specify at least one token value type while
           defining the token type you'll have to list all value types used for
           __class_token_def__ declarations in the token definition class,
           otherwise compilation errors will occur.
]
[heading Examples of using __class_lexertl_token__]

Let's start with some examples. We refer to one of the __lex__ examples (for
the full source code of this example please see
[@../../example/lex/example4.cpp example4.cpp]).

[import ../example/lex/example4.cpp]

The first code snippet shows an excerpt of the token definition class, the
definition of a couple of token types. Some of the token types do not expose a
special token value (`if_`, `else_`, and `while_`). Their token value will
always hold the iterator range of the matched input sequence. The token
definitions for the `identifier` and the integer `constant` are specialized
to expose an explicit token value type each: `std::string` and `unsigned int`.

[example4_token_def]
As the parsers generated by __qi__ are fully attributed, every __qi__ parser
component needs to expose a certain type as its parser attribute. Naturally,
the __class_token_def__ exposes the token value type as its parser attribute,
enabling a smooth integration with __qi__.

The next code snippet demonstrates how the required token value types are
specified while defining the token type to use. All of the token value types
used for at least one of the token definitions have to be re-iterated for the
token definition as well.

[example4_token]

To prevent a token from having a token value at all, the special tag `omit` can
be used: `token_def<omit>` and `lexertl_token<base_iterator_type, omit>`.

[endsect]