I think I've got your point now. Yes, your constraint is not being enforced.
There's a fair bit of information in the help file: tutorial/ConstrainedOptimizationGlobalNumerical
but one part particularly relevant to you is "Constraints are generally enforced by adding penalties when points leave the feasible region."
This is standard for many algorithms that solve constrained optimization problems: they solve an unconstrained version of the problem, adding a penalty term that grows when the variables violate the constraints. In practice this is often better than trying to apply a 'hard' constraint before evaluating. For one thing, if an optimum lies on a constraint boundary, the search can converge towards it from multiple directions, including from the infeasible side.
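The penalty idea itself is language-agnostic. Here is a minimal Python sketch (purely illustrative — the objective, bound, and penalty weight are made up, and this is not Mathematica's internal implementation): infeasible trial points are still evaluated, but a penalty proportional to the squared violation steers the search back towards the feasible region.

```python
# Minimal penalty-method sketch: minimize f(x) = (x - 2)^2 subject to x <= 1.
# Instead of refusing infeasible trial points, evaluate them with a penalty
# that grows with the violation, so the search can approach the boundary
# optimum x = 1 from either side.

def f(x):
    return (x - 2.0) ** 2

def penalized(x, weight=1e6):
    violation = max(0.0, x - 1.0)          # distance past the bound x <= 1
    return f(x) + weight * violation ** 2  # quadratic penalty outside the region

# A crude grid of trial points stands in for the optimizer's search.
trials = [-1.0 + 0.001 * i for i in range(4001)]  # x in [-1, 3]
best_x = min(trials, key=penalized)
print(best_x)  # close to the boundary optimum x = 1
```

Note that the best point found sits on the constraint boundary: the penalty does not forbid x > 1, it just makes such points uncompetitive.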
So your issue is that, with the (presumably default) method, the values returned by g (and the differences between values at different trial points) are so large that, when selecting new points to trial, the method chooses an inappropriate value for one of the variables.
You can fix this by using a different method (there are 4 methods described at the above link), and/or different options for the chosen method. The best choice depends on the nature of your function 'g': does it give useful gradient information? Is it convex? Is it stochastic?
It's difficult to make a concrete suggestion without knowing more about 'g', but a start would be to wrap 'g' in some function such as Abs, as described in my previous post (to keep it safe in case the method temporarily tests out-of-bounds values), in conjunction with a very large PenaltyFunction, a small Tolerance, and a large set of InitialPoints.
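To show what the Abs trick buys you, here is a hedged Python sketch (the function g below is a hypothetical stand-in that is only defined for non-negative arguments — your actual 'g' will differ): wrapping the argument in abs() keeps the evaluation from failing when the optimizer probes slightly out-of-bounds trial values, while a large penalty term still pushes the search back inside the feasible region.

```python
import math

# Hypothetical objective that fails for negative arguments,
# standing in for your 'g' (illustrative assumption only).
def g(x):
    return math.sqrt(x) * (x - 3.0)  # math domain error if x < 0

# abs() makes the evaluation safe for slightly negative trial values;
# the quadratic penalty still steers the search back towards x >= 0.
def safe_g(x, weight=1e6):
    return g(abs(x)) + weight * min(0.0, x) ** 2

print(safe_g(-0.01))  # evaluates without error; heavily penalized
print(safe_g(0.01))   # feasible point: identical to g(0.01)
```

Inside the feasible region safe_g agrees with g exactly, so the modification changes nothing about the solution you converge to — it only protects the intermediate evaluations.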