Post by Gary Spivey
what I don't understand is what the Verilog rules could be?
They are fairly complex. They generally give you "natural" results, so
you don't have to think about them in most situations. For example, in
your original code, z just gets a wider version of ~a. It is only
because you tried to figure out exactly what the details were that it
seemed strange to you. But when you do need to understand the exact
details, it gets complicated.
Post by Gary Spivey
It would seem that all expressions on the RHS would have to be
evaluated BEFORE being expanded and assigned to the LHS?
Verilog first determines the width of an expression based on the
widest operand in it, including the LHS of any assignment. Then it
extends all operands (actually, all context-determined operands) to the
width of their expression before performing any operations. All
extensions are done as early as possible, in an attempt to avoid
overflows in intermediate results.
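As a small sketch of that rule (the names and widths here are picked
just for illustration):

  module add_width_sketch;
    reg [3:0] a, b;
    reg [7:0] z;

    initial begin
      a = 4'b1111;
      b = 4'b1111;
      // The expression width is 8, from the widest operand involved,
      // which here is the LHS z. a and b are zero-extended to 8 bits
      // before the add, so the carry out of bit 3 is kept: z = 30.
      z = a + b;
      $display("z = %d", z);
    end
  endmodule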
For example, if you multiply two 16-bit values together and assign the
result to a 32-bit variable, it will extend the two factors to 32 bits
before the multiply. This gives the equivalent of a full 16x16=32-bit
multiply. If it didn't do this, you would just get a 16-bit result
extended to 32 bits, which is not what you want.
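Here is a sketch of that case (the names are hypothetical, and the
concatenation at the end is only there for contrast):

  module mult_width_sketch;
    reg [15:0] x, y;
    reg [31:0] p;

    initial begin
      x = 16'hFFFF;
      y = 16'hFFFF;
      // x and y are extended to 32 bits (the width of p) before the
      // multiply, so p gets the full product, 32'hFFFE0001.
      p = x * y;
      // For contrast, concatenation operands are self-determined, so
      // here the multiply is done at 16 bits and only the low half
      // (16'h0001) survives to be zero-extended into p.
      p = {x * y};
    end
  endmodule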
Post by Gary Spivey
I haven't checked it out, but what would a reduction operator do? If I
had a = 4'b1111 and said
z = &a;
would it zero-pad a to 8 bits and then set z to 0 via the reduction?
Operands of some operators are specified to be context-determined, and
others to be self-determined. Context-determined operands get their
width from the expression around them. Self-determined operands of an
operator are independent of the expression around them, and compute
their width only based on things in that operand sub-expression.
Which operands are specified to work each way makes a fair amount of
sense. If you are going to AND together two vectors, they need to be
the same width because they are going to be combined in a bitwise
fashion. But if you are shifting one number by another number, there
is no reason that the shift count needs to be the same width as the
value being shifted. Their bits do not directly combine. So a shift
count is a self-determined operand.
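A sketch of that, with made-up names and widths:

  module shift_width_sketch;
    reg [31:0] value, result;
    reg [2:0]  count;   // only 3 bits wide

    initial begin
      value = 32'h0000_00FF;
      count = 3'd4;
      // count is self-determined: its 3-bit width plays no part in
      // sizing the expression, which is 32 bits from value and
      // result. result gets 32'h0000_0FF0.
      result = value << count;
    end
  endmodule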
When the result of an operation will have a fixed width regardless of
the width of its operand, there is no reason for the operand to care
about the width of the context. This is true of the reduction
operators. The result will always be 1 bit, no matter what the operand
width is. There is no point in extending the operand. Instead, the
1-bit result will be extended to the width of the expression as soon as
it is produced. The operand of a reduction operator is
self-determined.
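That is exactly the case you asked about. As a sketch, assuming z
happens to be 8 bits wide:

  module reduction_sketch;
    reg [3:0] a;
    reg [7:0] z;

    initial begin
      a = 4'b1111;
      // &a is self-determined: the reduction is done on the 4-bit a,
      // producing 1'b1. That 1-bit result is then zero-extended to
      // the 8-bit expression width, so z gets 8'b0000_0001.
      z = &a;
    end
  endmodule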
On the other hand, all the bits of a bitwise NOT will be available to
the expression containing it, and may be assigned or used in another
bitwise operation. So the operand of a bitwise NOT is
context-determined.
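For contrast with the reduction case, a sketch of the bitwise NOT,
again assuming a 4-bit a and an 8-bit z:

  module bitwise_not_sketch;
    reg [3:0] a;
    reg [7:0] z;

    initial begin
      a = 4'b1010;
      // a is context-determined: it is zero-extended to the 8-bit
      // expression width first and then inverted, so z gets
      // 8'b1111_0101.
      z = ~a;
    end
  endmodule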
Post by Gary Spivey
It would seem to me that it should have to do the reduction first, and
then set z to 1. - AND, I just checked it in ModelSim and this IS what
happens. So, why does the reduction operator get evaluated BEFORE the
zero-pad, but the ~ operator gets evaluated AFTER the zero-pad? They
are BOTH unary operators.
But one has a self-determined operand and the other has a
context-determined operand. The Verilog LRM fully specifies this for
all the operands of all the operators. They generally follow a logical
scheme that makes sense. And again, in most cases they are designed to
give reasonable results, so that most users don't have to worry about
them.