Ice for non-Intel systems (SPARC)

As an exercise, I have tried building Ice on a Linux/Sparc SMP system, gcc-3.2. The following comments might be of interest.

1. For the build to get anyplace at all, I had to add to CXXFLAGS
-Di386 -DICE_USE_MUTEX_SHARED=1 (in config/Make.rules)
The first is a lie to keep some tests in header files happy, and the second is
to avoid intel assembler.

2. In any file which included anything from xerces/..., I had to add, in "appropriate places", the macro call
XERCES_CPP_NAMESPACE_USE
in order to get the equivalent of "using namespace xercesc;", which is how Xerces itself was built.
("Appropriate" for me was after the other "using..." directives in .cpp files, and in .h files thus:
namespace <whatever> {
XERCES_CPP_NAMESPACE_USE
)

The first of these is a work-around, but the second is probably a good idea in general
since the macro is empty if it is not required.
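
For illustration, here is roughly what the placement described in point 2 looks like in a header file. This is a sketch only; the namespace, class, and member names are placeholders, not actual Ice code:

// Pull the Xerces-C names into the enclosing namespace so that unqualified
// uses (DOMDocument, XMLString, ...) keep compiling.  "Example" and the
// Parser class are hypothetical.
#include <xercesc/dom/DOM.hpp>

namespace Example
{

XERCES_CPP_NAMESPACE_USE

class Parser
{
public:
    DOMDocument* parse(const char* path);  // unqualified Xerces type resolves here
};

}

In a .cpp file the macro can simply go at file scope next to any other using-directives, as described above.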

With these changes, Ice builds cleanly, and all tests run except for Glacier/startup.

Comments or suggestions?

Comments

  • Re: Ice for non-Intel systems (SPARC)

    Thanks a lot for your feedback!
    Originally posted by fmccor
    With these changes, Ice builds cleanly, and all tests run except for Glacier/startup.

    Comments or suggestions?

    I'm afraid I don't know what the problem could be without having a Solaris system available.

    However, we just decided to purchase Sun hardware ASAP, so we will officially support Ice on Sun/Solaris in one of the next versions.
  • It's a Sparc/Linux system. Making the compile work on Solaris would be harder,
    although I might try it.

    I'm pretty sure the problem I see with the Glacier test is with my system configuration.
    I'll look at it, but it's not very high priority for me. Maybe I'll actually read the Ice
    documentation first...

    Regards,
  • Originally posted by fmccor
    It's a Sparc/Linux system. Making the compile work on Solaris would be harder,
    although I might try it.

    I also tried a Solaris-sparc build. It failed because there is no UUID library installed as standard on Solaris.

    -apm
  • The following code fragment is useful for Xerces-C compatibility (including xercesc/util/XercesDefs.hpp makes _XERCES_VERSION and the namespace macros available even if no other Xerces header has been included yet):

    #include <xercesc/util/XercesDefs.hpp>

    #if _XERCES_VERSION >= 20200
    XERCES_CPP_NAMESPACE_USE
    #endif
  • I'll respond to myself with a little more information on the test/Glacier/starter test
    failure I noted.

    1. Configuration as originally described;
    2. All tests but this one run fine;
    3. All demos seem to run fine;
    4. The failure with the Glacier test is a random time-out. That is, less often than not,
    the test runs successfully to completion. Typical output looks like
    ====================================================
    ferris@lacewing:test/Glacier/starter [666]% run.py
    starting glacier starter... ok
    starting server... ok
    starting client... ok
    ../../../test/Glacier/starter/client: warning: connection exception:
    Outgoing.cpp:124: Ice::TimeoutException:
    timeout while sending or receiving data
    local address = 127.0.0.1:35081
    remote address = 127.0.0.1:12346
    ../../../test/Glacier/starter/client: Outgoing.cpp:124: Ice::TimeoutException:
    timeout while sending or receiving data
    ../../../bin/glacierstarter: warning: connection exception:
    SslTransceiver.cpp:255: Ice::ConnectionLostException:
    connection lost: recv() returned zero
    local address = 127.0.0.1:12346
    remote address = 127.0.0.1:35081
    creating and activating callback receiver adapter... ok
    creating and adding callback receiver object... ok
    testing stringToProxy for glacier starter... ok
    testing checked cast for glacier starter... ok
    starting up glacier router...
    ferris@lacewing:test/Glacier/starter [667]%
    ====================================================

    5. System is a relatively slow multiprocessor (Ultra2 2x300) sparc linux system,
    kernel 2.4.20.
  • Our Sun is ordered, and will hopefully arrive sometime next week. Therefore, expect Solaris to be officially supported quite soon.
  • Ferris,

    Can you try my i386 patch which I posted in the
    patches Forum?
  • Yes, that's what I was talking about. On SPARC,
    #define SIZEOF_WCHAR_T 4
    seems to be what you want, so I disable that check. (At least, it uses
    #define __WCHAR_MAX (2147483647)
    so the size had better be 4)
  • Cool.

    We can leave the check in there, but add a little macro magic for
    sparc.

    Can you create an empty file, a.cpp, then do:

    gcc -v a.cpp

    and post all of the compiler flags that are displayed?
    Then I will know for sure what macro to use.

    :D


    I know everyone hates autoconf, but some type of scripted platform inspection
    step is useful for this kind of drudgery.
  • The flags follow, but what you will want is
    #ifdef __sparc__

    ==============
    gcc version 3.2.1 20021207 (Gentoo Linux 3.2.1-20021207)
    /usr/lib/gcc-lib/sparc-unknown-linux-gnu/3.2.1/cc1plus -v -D__GNUC__=3 -D__GNUC_MINOR__=2 -D__GNUC_PATCHLEVEL__=1 -D__GXX_ABI_VERSION=102 -D__ELF__ -Dunix -D__sparc__ -D__gnu_linux__ -Dlinux -D__ELF__ -D__unix__ -D__sparc__ -D__gnu_linux__ -D__linux__ -D__unix -D__linux -Asystem=unix -Asystem=posix -D__NO_INLINE__ -D__STDC_HOSTED__=1 -D_GNU_SOURCE -D__GCC_NEW_VARARGS__ -Acpu=sparc -Amachine=sparc e.cpp -D__GNUG__=3 -D__DEPRECATED -D__EXCEPTIONS -quiet -dumpbase e.cpp -version -o /tmp/cc3PASLu.s
    ==============

    I am rebuilding now without -Di386 and with the "&& defined(i386)" tests removed.
    Compiles are going fine, so far (this is Ice-1.0.1). Note that you still need
    -DICE_USE_MUTEX_SHARED=1
    on sparc. There is assembler code around for this, like on intel, but I am not sure that
    it works on multiprocessors, and for now I don't much care.

    As noted previously, this version seems to do all tests and demos correctly except
    for the Glacier test, which fails nondeterministically with a time out. It is possible that
    this system is really timing out because it is slower than the test allows for. (For
    me in the sparc world, a 2x300 multiprocessor seems fast. In CPU terms, though,
    it is quite a bit slower than any Intel-based system you will ever run into.)
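
    Coming back to the SIZEOF_WCHAR_T point above, the "macro magic" could look something like the following. It is only a sketch of the idea; the surrounding Ice header is not reproduced, and the i386 branch is left as a placeholder for whatever check is there now:

    #if defined(__sparc__)
    /* gcc on SPARC: wchar_t is a 32-bit type (__WCHAR_MAX is 2147483647),
       so state the answer directly instead of running the i386-only probe. */
    #   define SIZEOF_WCHAR_T 4
    #elif defined(__i386)
    /* ... existing i386 check, unchanged ... */
    #endif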
  • And, for completeness, here is what a typical compile call looks like:

    c++ -c -I.. -I/opt/openssl/include -I/home1/ferris/Packages/ICE/xerces-c-src2_2_0/include -I../../include -O2 -pipe -mcpu=v8 -mtune=ultrasparc -DNDEBUG -ftemplate-depth-128 -fPIC -Wno-deprecated -fpermissive -DICE_USE_MUTEX_SHARED=1 SslEndpoint.cpp

    Sorry for the extra post.
  • Originally posted by fmccor
    Compiles are going fine, so far (this is Ice-1.0.1). Note that you still need -DICE_USE_MUTEX_SHARED=1 on sparc. There is assembler code around for this, like on intel, but I am not sure that
    it works on multiprocessors, and for now I don't much care.

    I have emailed marc with the assembler code needed for sparc. It should be ok for multi-processors.

    -apm
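
    For anyone skimming the thread, the difference being discussed is roughly the following. The sketch below is illustrative only (a hypothetical class, not IceUtil's actual code): with the mutex fallback that -DICE_USE_MUTEX_SHARED=1 apparently selects (per point 1 of the original post, it avoids the assembler), every reference-count change takes a lock, which is portable but slower than a hand-written atomic increment.

    // Illustrative sketch only -- not Ice's implementation.
    #include <pthread.h>

    class RefCounted
    {
    public:
        RefCounted() : _ref(0) { pthread_mutex_init(&_mutex, 0); }
        ~RefCounted() { pthread_mutex_destroy(&_mutex); }

        void incRef()
        {
            pthread_mutex_lock(&_mutex);   // mutex fallback: portable everywhere
            ++_ref;
            pthread_mutex_unlock(&_mutex);
        }

        void decRef()
        {
            pthread_mutex_lock(&_mutex);
            bool last = (--_ref == 0);
            pthread_mutex_unlock(&_mutex);
            if (last)
            {
                delete this;               // last reference gone
            }
        }

    private:
        int _ref;
        pthread_mutex_t _mutex;
    };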
  • If you could send me the sparc assembly you are using, or tell me where you
    got it, I can try it with a sparc/linux Ultra-2 multiprocessor. (with Ice-1.0.1)
  • Originally posted by fmccor
    If you could send me the sparc assembly you are using, or tell me where you
    got it, I can try it with a sparc/linux Ultra-2 multiprocessor. (with Ice-1.0.1)

    The code I wrote is specifically for the Sparc hardware. Intel code is already in Ice. I was working in a Solaris Forte6.2 environment. There is much porting work required here and I can no longer spend any time on it. I have sent my patches to the Ice developers.

    -apm
  • I now have assembler code which seems to work on sparc-v9/linux systems for the
    atomic operations. It is basically David Miller's sparc64 Linux kernel code repackaged
    to live in a little file (IceAtomic.c) in the <src/IceUtil> directory. It should work on
    sparc-v9/solaris systems, but I don't have a Solaris Ice port to try it with.
    It is available in "Patches > SPARC mutex assembler: patches & new code" and
    attached to this. (If I had realized I could attach a file outside the Patch threads,
    I would not have put it there. It really should just be here because it is for comment;
    it is not for general use. Apologies for the denseness.)

    The patch file changes:
    config/Make.rules <<< New flags and rules to enable sparc assembler
    include/IceUtil/Shared.h <<< Defines for sparc assembler
    src/IceUtil/Makefile <<< Compile IceAtomic.c if the environment is right.
    src/IceUtil/IceAtomic.c <<< New file >>> Implement __atomic_add, etc.
    src/icecpp/config.h <<< (allow sparc architecture)

    This will NOT work on sparc-v8 (SS20) or earlier because it uses instructions which do
    not exist prior to sparc version 9.

    With this enabled, Ice for me passes all tests except for my random Glacier/starter
    timeout, but I don't have a proof of correctness for the code, nor do I have an extensive
    stress test. For reference, the system on which it works describes itself with
    uname -srvmpio as:

    Linux 2.4.20-sparc-r0 #3 SMP Fri Jan 3 15:56:09 UTC 2003 sparc64 sun4u TI UltraSparc II (BlackBird) GNU/Linux

    I am submitting this as an example or strawman only, and note again that the code
    comes from the Linux kernel and carries David S. Miller's copyright.
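
    For readers who only want the flavor of what IceAtomic.c does without pulling the patch, the heart of such an __atomic_add is a compare-and-swap loop. The sketch below is illustrative only: a gcc __sync builtin (available in later gcc releases) stands in for the sparc-v9 cas instruction that the real, kernel-derived assembler uses, and the name and return convention are not necessarily those of the patch:

    /* Illustrative sketch only -- not the patch itself. */
    static int atomic_exchange_and_add(volatile int* counter, int delta)
    {
        int assumed;
        int actual;
        do
        {
            assumed = *counter;
            /* cas semantics: if *counter still holds 'assumed', store
               assumed + delta; either way, return what was really there. */
            actual = __sync_val_compare_and_swap(counter, assumed, assumed + delta);
        }
        while (actual != assumed);   /* another CPU changed it first -- retry */
        return actual;               /* value of the counter before the add */
    }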