Deep linguistic processing is a natural language processing framework which draws on theoretical and descriptive linguistics. It models language predominantly by way of explicit syntactic/semantic theories (e.g., CCG, HPSG, LFG, TAG, the Prague School).
Depth of processing falls on a shallow-to-deep continuum. Shallow processing (e.g., processing based on phonemic and orthographic components) leads to a fragile memory trace that is susceptible to rapid decay. Conversely, deep processing (e.g., semantic processing) results in a more durable memory trace. [1]
Deep processing involves semantic processing, which happens when we encode the meaning of a word and relate it to similar words. Shallow processing involves structural and phonemic recognition: processing the form of sentences and words and their associated sounds, rather than their meaning.
For example, in a sentence such as "He entered John's house through the front door", "the front door" is a referring expression, and the bridging relationship to be identified is that the door being referred to is the front door of John's house (rather than of some other structure that might also be referred to).
These types of inferences are also referred to as "bridging inferences." For example, if a reader came across the following sentences together, they would need to infer that the sentences are related to one another in order to make sense of the text as a whole: "Mary poured the water on the bonfire. The fire went out."
Semantic processing is the deepest level of processing; it requires the listener to think about the meaning of the cue. Brain imaging studies have shown that when semantic processing occurs, there is increased activity in the left prefrontal regions of the brain that does not occur during other kinds of processing.
Many languages allow generic copying by one or both strategies, defining either a single copy operation or separate shallow-copy and deep-copy operations. [1] Note that shallower still is to use a reference to the existing object A, in which case there is no new object, only a new reference. The terminology of shallow copy and deep copy dates to ...
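The distinction among a reference, a shallow copy, and a deep copy can be sketched with Python's standard-library copy module (the nested list here is just an illustrative example):

```python
import copy

# A nested list: the outer list holds references to the inner lists.
a = [[1, 2], [3, 4]]

alias = a                 # no new object at all, only a new reference to a
shallow = copy.copy(a)    # new outer list, but the inner lists are shared with a
deep = copy.deepcopy(a)   # recursively copies the inner lists as well

a[0][0] = 99              # mutate an inner list through the original

print(alias is a)         # True  -- the alias is the same object
print(shallow[0][0])      # 99    -- the shallow copy shares the mutated inner list
print(deep[0][0])         # 1     -- the deep copy is unaffected
```

Mutating through the original then reveals which copies share state: the alias and the shallow copy both see the change, while the deep copy does not.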
These models are shallow, two-layer neural networks that are trained to reconstruct linguistic contexts of words. Word2vec takes as its input a large corpus of text and produces a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space.