|
|
At the bottom-level interface to RPC, the application can control all options, transport-related and otherwise. clnt_tli_create and the other expert-level RPC interface routines are implemented on top of these bottom-level routines. Programmers should not normally use these low-level routines. The routines are responsible for creating their own data structures, managing their own buffers, creating their own RPC headers, and so on.
Callers of these routines (such as the expert-level routine clnt_tli_create) are responsible for initializing the cl_netid and cl_tp fields within the client handle. The bottom-level routines clnt_dg_create and clnt_vc_create are themselves responsible for populating the clnt_ops and cl_private fields.
For a created handle, cl_netid is the network identifier (for example, udp) of the transport and cl_tp is the device name of that transport (for example, /dev/udp).
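For illustration only, here is a minimal sketch of how a caller might create a datagram client handle and fill in these two fields, much as clnt_tli_create does internally. The function name make_client, and the fd and svcaddr arguments, are assumptions for this sketch:

#include <rpc/rpc.h>
#include <string.h>

/*
 * Sketch: create a connectionless (datagram) client handle and
 * initialize cl_netid and cl_tp, which clnt_dg_create leaves to
 * its caller. clnt_dg_create itself populates the clnt_ops and
 * cl_private fields.
 */
CLIENT *
make_client(int fd, struct netbuf *svcaddr,
        rpcprog_t prog, rpcvers_t vers, u_int sendsz, u_int recvsz)
{
        CLIENT *cl;

        cl = clnt_dg_create(fd, svcaddr, prog, vers, sendsz, recvsz);
        if (cl == NULL)
                return (NULL);
        cl->cl_netid = strdup("udp");       /* network identifier */
        cl->cl_tp = strdup("/dev/udp");     /* transport device name */
        return (cl);
}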
The following example shows the use of local variables to control the exact details of the calls to clnt_vc_create and clnt_dg_create; these routines thus allow control of the transport down to the lowest level:
switch (tinfo.servtype) {
case T_COTS:
case T_COTS_ORD:
        cl = clnt_vc_create(fd, svcaddr, prog, vers, sendsz, recvsz);
        break;
case T_CLTS:
        cl = clnt_dg_create(fd, svcaddr, prog, vers, sendsz, recvsz);
        break;
default:
        goto err;
}
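For context, one plausible way to obtain the fd and tinfo used above is through the TLI calls t_open and t_bind; the device name /dev/udp here is an assumption, and svcaddr, sendsz, and recvsz are presumed to be set up elsewhere by the caller:

#include <tiuser.h>
#include <fcntl.h>

struct t_info tinfo;
int fd;

/* t_open fills in tinfo, including tinfo.servtype */
/* (T_COTS, T_COTS_ORD, or T_CLTS). */
if ((fd = t_open("/dev/udp", O_RDWR, &tinfo)) == -1)
        goto err;

/* Bind the endpoint to any available local address. */
if (t_bind(fd, NULL, NULL) == -1)
        goto err;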
And, again, on the server side:
/* Call the transport-specific function. */
switch (tinfo.servtype) {
case T_COTS_ORD:
case T_COTS:
        xprt = svc_vc_create(fd, sendsz, recvsz);
        break;
case T_CLTS:
        xprt = svc_dg_create(fd, sendsz, recvsz);
        break;
default:
        goto err;
}
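After either branch succeeds, the server would typically register the new handle and enter the dispatch loop. A minimal sketch follows, in which dispatch, PROGNUM, and VERSNUM are hypothetical names; passing NULL for the netconfig pointer means the service is not registered with rpcbind:

/* Associate the program and version with this transport */
/* handle and hand control to the RPC library. */
if (!svc_reg(xprt, PROGNUM, VERSNUM, dispatch, NULL))
        goto err;
svc_run();        /* dispatch loop; does not return */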