The FindMagnumBindings module so far worked for bindings used as a
subproject due to some weird magic, but as of 2bcc7b94d3
it no longer does, which is how it should be, as no such target was
created by the subproject buildsystem at all until now.
The wording was so insufficient that it made people think it was a fatal
error, and subsequently made them suspicious because it seemed like the
fatal error was being ignored.
On multi-config builds this was putting __init__.py into a directory
named $<CONFIG>. Funnily enough, that worked on Linux, but it still
caused an issue when actually installing the package, as __init__.py was
then missing from the proper location. On Windows it blew up when
attempting to create that directory in the first place.
This has been broken since e5e7824b96, and this
fix is what that commit should have been instead -- adding the `.in`
suffix to the configure_file() output as well, to prevent it from being
interpreted by Python.
That commit is from January, and I'm terribly sorry for this regression
being around for so long. The reason it went unnoticed is that none of
the CI jobs use a multi-config build (which I ultimately have to fix),
and my local Ninja Multi-Config build directory worked only because it
still contained the original __init__.py file from before that commit,
and I hadn't recreated the build directory since. Heh.
This was originally added in d6fec89dc5 as
a doc-generation-only hack, but other tools such as stub generation may
need similar special cases, so it's now an env var check in the binding
generation directly.
It's not a check in every invocation because that *feels* slow (although
pybind11 itself likely does a lot more nasty string comparisons, hashmap
lookups and linked list traversals than that), so if such an env var was
defined while importing the module, current() is then forever broken,
until interpreter restart.
SceneContents.FOR() uses it as an argument. This wasn't caught by the
doc generator, and it looks like it didn't break testing either,
probably because this is an overloaded function and by the time an
overload gets picked, the types are already defined. Or something. Not
sure.
It got imported in this order for doc generation and probably also in
all tests, but when imported alone, the signature of copy() is broken
because it references a not-yet-known type.
This now causes construction of SceneFieldData from a 2D view to do
`import numpy` internally because of some extremely crazy internal
behavior (as shown in the now-deleted comment in the code). Turns out
everything still works even without marking the types implicitly
convertible from py::array (as it should, anyway), so I suspect that was
only needed a long time ago for some strange reason, or maybe on some
older, no longer supported pybind11 version.
This reverts commit eb6576c6af.
Yay, finally it's (almost) possible to create custom meshes from pure
Python. Except that there always has to be some initial mesh to add
attributes to, and it's not yet possible to supply index data there.
This finally makes it possible to expose APIs that take StridedArrayView
instances as an input. Until now the type information was always lost,
making all views plain bytes and thus impossible to check whether the
passed types were at least of a large enough size, if nothing else.
Preserving the type means there has to be a type-dependent
implementation of __getitem__() and __setitem__(). So far this is only
done for the very basic builtin types, similar to what Python's own
array module supports.
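The idea can be sketched in pure Python with the struct module (the class name, the flat-buffer layout and the 1D restriction are assumptions for illustration; the real implementation lives in the C++ bindings):

```python
import struct

class TypedView:
    """Minimal 1D sketch: a stored format character drives typed element
    access over raw bytes, similar to Python's own array module."""

    def __init__(self, data, fmt, stride=None):
        self.data = data                        # a writable buffer
        self.fmt = fmt                          # e.g. 'B', 'i', 'f', 'd'
        self.itemsize = struct.calcsize(fmt)
        self.stride = self.itemsize if stride is None else stride

    def __getitem__(self, i):
        # unpack one typed element at the strided offset
        return struct.unpack_from(self.fmt, self.data, i*self.stride)[0]

    def __setitem__(self, i, value):
        # pack one typed element back at the strided offset
        struct.pack_into(self.fmt, self.data, i*self.stride, value)
```

With this, `TypedView(bytearray(8), 'f')[1] = 2.5` reads back as a float instead of four opaque bytes.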
In the buffer protocol it used to advertise untyped data with B as the
format string, but __getitem__ and __setitem__ were using the char type
(implicitly, coming from the fact that the exposed type is
ArrayView<char>, StridedArrayViewND<char> or their const variants),
resulting in the data being treated as characters by Python. Which was
extremely annoying and inconsistent with how bytes and bytearray behave.
Now the ArrayView bindings always operate with std::uint8_t, and for
StridedArrayView there's a special case for the <char> type that treats
it as std::uint8_t as well. Furthermore, to hint that <char> is
"general data", the format string for it is null / None instead of B.
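The char-vs-unsigned-byte difference is easy to see with memoryview casts in plain Python: a c-formatted view hands out single-character bytes objects, while B hands out integers, consistent with indexing into bytes and bytearray:

```python
mv = memoryview(b'AB')

# 'c' treats each byte as a character -- annoying for general data
assert mv.cast('c')[0] == b'A'

# 'B' treats each byte as an unsigned integer, matching how bytes and
# bytearray behave on indexing
assert mv.cast('B')[0] == 65
assert b'AB'[0] == 65
```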
Causes problems when running tests with multi-config (Ninja) builds, as
an import of the corrade module is then attempted from a directory where
__init__.py is present but the actual binaries are not.
If enabled, this causes sys.setdlopenflags() to be called with
RTLD_GLOBAL before the native Corrade module is loaded, in the hope of
resolving recurring nightmares with static Corrade and Magnum libraries
being linked into multiple dynamic modules.
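A sketch of that pattern in plain Python (the native module name is hypothetical; sys.setdlopenflags() and os.RTLD_GLOBAL only exist on POSIX dlopen() platforms, hence the guard):

```python
import sys

try:
    import os
    # Make symbols of subsequently dlopen()ed extension modules globally
    # visible, so globals in statically linked libraries are shared
    # between multiple dynamic modules instead of duplicated
    sys.setdlopenflags(os.RTLD_NOW | os.RTLD_GLOBAL)
except AttributeError:
    pass  # not a POSIX dlopen() platform, nothing to do

# import _corrade  # hypothetical native module, loaded with RTLD_GLOBAL
```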
Originally those were assertions that were kept even in release builds,
which meant that calling math.angle() on non-normalized vectors aborted
the whole Python interpreter. Not great. But then the assertions were
made debug-only, which means invalid usage from Python (where the
bindings are usually built as Release) now silently gives back a wrong
result, which is perhaps even worse.
Because the Python overhead is already massive due to all the string
lookups and such, doing one more check in the implementations isn't
really going to slow anything down. Thus I'm mirroring all (debug-only)
Magnum assertions on the Python side, turning them into exceptions. With
proper messages as well, because those are extremely useful.
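A hedged pure-Python stand-in for what that means for e.g. math.angle() (not the actual binding code; the tolerance and message are assumptions), mirroring the C++ debug assertion as an exception with a proper message:

```python
import math

def angle(a, b):
    """Angle between two normalized vectors; raises instead of silently
    returning a wrong result for non-normalized input."""
    def dot(u, v):
        return sum(x*y for x, y in zip(u, v))

    # the C++ side asserts this only in debug builds; here it's always on
    if abs(dot(a, a) - 1.0) > 1e-6 or abs(dot(b, b) - 1.0) > 1e-6:
        raise ValueError(f"vectors {a} and {b} are not normalized")
    # clamp to guard against rounding pushing the dot product out of range
    return math.acos(max(-1.0, min(1.0, dot(a, b))))
```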